Validation of compiler and IDE - Why, when and how to? - Part 1
Validating the compiler used in software development is a recurring issue. To what extent should a compiler be validated, and when, how, and why?
In the same vein, the question of validation extends to all tools used in the software development environment: the integrated development environment (IDE), configuration management tools, the compiler (and linker), and automated test tools.
Edit June 2016: this article remains relevant with the new requirements on software validation found in ISO 13485:2016.
Class of medical device
If you're in FDA class III, CE-mark class III, or IEC 62304 class C, you have to do it thoroughly if a flaw in a development tool represents an unacceptable risk!
If you're in FDA class II, CE-mark class IIa or IIb, or IEC 62304 class B, you may do it, but it's far from mandatory!
If you're in FDA class I, CE-mark class I (or even class IIa), or IEC 62304 class A, do it if you have spare time!
In other words, thorough development tool validation, and especially compiler validation, is only relevant for very, very, very critical software.
Edit June 2016: following the risk-based approach found in ISO 13485:2016, this rationale remains relevant.
Perhaps it makes sense for a small subset of embedded software used in class III medical devices, like pacemakers. Likewise, it makes sense in automotive or airborne systems, where a software failure can mean dozens of casualties.
Development tools are low risk
The main rationale for not validating development tools is to consider them low-risk software. If there is a bug in one of these tools, the software built with them will be buggy, and odds are pretty good that this bug will be discovered during software tests (be they unit, integration, or functional).
Here are some examples of bugs in development tools:
- My IDE has a bug in the code editor and doesn't save source files under specific conditions. I'm going to notice it quickly! Or I'll notice it in a colleague's code during a code review.
- My source control tool has a bug in the graphical merge function. I'm going to see it quickly as well!
- My compiler doesn't cast a floating-point value to an integer value correctly under certain circumstances. I'm not going to see it quickly. But I'll probably see it during tests, through inconsistent computed values.
All in all, tests in the software development process are there to find problems created in earlier stages of the process. Most of these problems are created by humans (we can't think of everything), and some are created by the tools we use (the people who built the tools couldn't think of everything either).
Process risk assessment
What is shown above amounts to assessing the risks of the software development process.
In class I, there is no point in validating development tools thoroughly, since the hazardous situations created by these tools either have low severity (the final software is class A of IEC 62304) or low probability (bugs created by these tools are fixed during code reviews or tests).
In class II or III, it's useful to validate these tools, since the hazardous situations they create (namely bugs in the built software) have high severity or high probability (think of the 100% probability assumption in software hazard analysis).
How to validate these tools
If you have to validate these tools, you can take examples from this guidance: AAMI TIR36, Validation of software for regulated processes. It has pretty good examples (except for compilers, see below).
You can also draw inspiration from GAMP 5 about computerized systems (pull out your credit card if you want to read it!).
If you don't want to buy either of these documents, there are plenty of examples available on the internet. Just search for IQ/OQ/PQ plans and reports.
Basically, a validation plan is a bit like applying the software development process of IEC 62304 with SOUP only:
- Assessing the risks of the software development process,
- Writing requirements for the ideal development tool (including requirements mitigating risks and requirements about the tool vendor),
- No architecture or detailed design; instead, selecting the right tool for your needs (don't forget interoperability with other tools),
- Tests with three levels:
- Installation Qualification (IQ), i.e. ensuring that the tool is deployed and properly configured on the development or integration platform, and verifying that all necessary documentation is available,
- Operational Qualification (OQ), i.e. verifying that it works and integrates well with other development tools, according to written and approved requirements (including requirements mitigating risks),
- Performance Qualification (PQ), i.e. using the development tools in real conditions over a period of time to ensure that the tool and its vendor perform as expected.
If you're in a case where there's no urgent need to validate software development tools, then just write a document with the rationale that led you to choose these tools.
In all cases, however, it's necessary to have a maintenance plan for the development tools, similar to what IEC 62304 requires for SOUP:
- Monitoring published bugs, bug fixes, and new versions,
- Assessing risks related to these bugs,
- Deciding whether it's necessary to install a new version of the development tool.
Development vs production processes
There are dozens of articles or memos or documents about validation of tools used in production processes.
The validation method described above is a bit peculiar because it deals with tools used in software development processes (for software design), not production processes (for production of physical goods or for delivery of standardized services).
Thus it is acceptable only for development processes. For production processes, like automated machines or test benches, this validation plan is too simple.
However, if you want to validate a compiler, this validation plan is still somewhat incomplete.
Validating a compiler once and for all is a titanic task! We'll see why in the next article.