Software in Medical Devices, by MD101 Consulting - Tag - Software Validation. Blog about software medical devices and their regulatory compliance. Main subjects are software validation, IEC 62304, ISO 13485, ISO 14971, CE mark 93/42 directive and 21 CFR part 820. By Cyrille Michaud.

How to validate software development tools like Jira or Redmine? (Mitch, 2016-07-01 - Processes, development process, Software Validation)

<p>Following the discussion on <a href="https://blog.cm-dm.com/post/2016/06/10/ISO/TR-80002-2%3A-lastest-news-on-Validation-of-software-for-medical-device-quality-systems">ISO/TR 80002-2 and AAMI TIR 36 in the previous article</a>, here are some tips on how to validate workflow and data management software like Jira or Redmine.</p> <h4>Why validate?</h4>
<p>First point of view, quality-oriented: the purpose of workflow and data management tools for software development is close to that of document management tools. Both are designed to record information, which is evidence of the application of QMS provisions:</p>
<ul>
<li>Document management tools for all document and records,</li>
<li>Software development workflow tools for artifacts produced during software lifecycle activities.</li>
</ul>
<p>Another point of view, more technical: workflow and data management tools for software development are connected to software configuration management (SCM) tools:</p>
<ul>
<li>SCM tools need to be validated: if your SCM tool doesn't work, you could generate a wrong version and put patients at risk,</li>
<li>Workflow tools need to be validated: if your workflow tool doesn't work, you could miss a design review, place a non-validated version on the market, and put patients at risk.</li>
</ul>
<h4>When to validate</h4>
<p>The probability of such risks is in most cases very low; thus the validation effort should be proportionate to these risks.<br />
Depending on your context, you could even conclude that validation is not mandatory. For example:</p>
<ul>
<li>If design reviews are managed outside the tool,</li>
<li>If the software documentation is extracted from the tool in PDF files, which are formally reviewed,</li>
<li>If the tool is only used to manage developers' daily tasks, without a formal link to software documentation.</li>
</ul>
<p>In any case, the rationale for validating or not, and the validation effort, shall be recorded in a document (see the risk assessment below).</p>
<h4>How to validate</h4>
<p>Software workflow management tools are off-the-shelf but highly configurable.<br />
The validation can be managed in four steps:</p>
<ul>
<li>The definition of your requirements,</li>
<li>The risk assessment on your software development process,</li>
<li>The qualification of the tool vendor,</li>
<li>The qualification of the tool itself configured for your needs.</li>
</ul>
<h5>Requirements</h5>
<p>First of all, the user requirements shall guide the validation process.<br />
The best way to do this is to write a statement of work or a software requirements specification document containing the user needs. Such a document should be written by the software team in collaboration with the quality team. It should also be based on your software development process.</p>
<h5>Risk assessment</h5>
<p>Once you've described the requirements, you can do a risk analysis on the software tool and its environment. For example:</p>
<ul>
<li>Risk of software failure of the tool,</li>
<li>Risk of insufficient training of users,</li>
<li>Risk of failure of the vendor.</li>
</ul>
<p>If all identified risks are deemed acceptable, you could stop the validation process here. Otherwise, you continue the process.</p>
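<p>As an illustration, the acceptability decision can be recorded as a simple probability/severity grid. Below is a minimal Python sketch; the three-level scales, the threshold and the example ratings are assumptions for illustration, not values taken from any standard:</p>

```python
# Illustrative risk grid for tool validation; the scales and the
# acceptability threshold are assumptions, not taken from any standard.
SCALES = {"low": 1, "medium": 2, "high": 3}

def risk_score(probability: str, severity: str) -> int:
    """Combine probability and severity into a single score."""
    return SCALES[probability] * SCALES[severity]

def is_acceptable(probability: str, severity: str, threshold: int = 3) -> bool:
    """A risk is acceptable when its score stays at or below the threshold."""
    return risk_score(probability, severity) <= threshold

risks = [
    ("software failure of the tool", "low", "high"),
    ("insufficient training of users", "medium", "medium"),
    ("failure of the vendor", "low", "medium"),
]

# If every risk is acceptable, the validation process could stop here.
all_acceptable = all(is_acceptable(p, s) for _, p, s in risks)
```

Whatever the scoring scheme, the point is that the outcome of each line of the risk analysis, and the stop-or-continue decision, end up recorded in the validation document.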
<h5>Qualification of the vendor</h5>
<p>Applying a purchase procedure is probably the most straightforward solution. However, the general procedure may lack selection and evaluation criteria specific to this kind of supplier. You can define criteria on:</p>
<ul>
<li>The software tool's functions, based on the software requirements,</li>
<li>The tool vendor's capability to provide support when installing, configuring and maintaining the tool.</li>
</ul>
<p>Your purchasing department will add criteria on the vendor's pricing policy :-) If there is only one tool matching your needs (Jira is Jira, you know what I mean), then it's not necessary to spend too much time on vendor selection.</p>
<h5>IQ/OQ/PQ</h5>
<p>When you have selected the tool and its vendor, it is time to qualify the tool itself.<br />
A very classical IQ/OQ/PQ process is the best way to organize the qualification. This is not how it is presented in AAMI TIR 36 (see the previous article), but auditors and inspectors are used to seeing such qualification steps.<br />
For software, all these steps can be gathered in a single software test plan, with:</p>
<ul>
<li>Installation Qualification: verification by inspection that the tool is correctly installed and personnel are trained,</li>
<li>Operational Qualification: test cases on the tool's functions, with traceability between test cases and the statement of work,</li>
<li>Performance Qualification: a kind of beta test phase, during which the software is used for a defined period, from one or two months up to years (choose your period).</li>
</ul>
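<p>The OQ traceability between test cases and the statement of work can be kept as a simple mapping and checked automatically. A minimal sketch, with hypothetical requirement and test case IDs:</p>

```python
# Hypothetical requirement IDs from the statement of work, and the OQ
# test cases that cover them; all names here are illustrative only.
requirements = {"SOW-001", "SOW-002", "SOW-003"}

oq_test_cases = {
    "TC-01-create-issue": {"SOW-001"},
    "TC-02-workflow-transition": {"SOW-002", "SOW-003"},
}

def uncovered(requirements, test_cases):
    """Return the requirements not traced to any test case."""
    covered = set().union(*test_cases.values())
    return requirements - covered

# An empty result means the OQ test plan covers the statement of work.
missing = uncovered(requirements, oq_test_cases)
```

Running such a check before executing the OQ phase gives you the traceability evidence an auditor will ask for, without maintaining the matrix by hand.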
<h4>When to revalidate</h4>
<p>The characteristic of such software is its perpetual enhancement to match new user needs. The tool is subject to configuration changes, minor or major, to adapt the workflow to the real software development process.<br />
You will need to assess the impact of every configuration change to the workflows implemented in the tool. It may be wise to wait until a minimum set of changes has accumulated before implementing them. Validation of changes can be expensive, since all validation records may need to be updated.</p>
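<p>The batching logic can be sketched as a simple rule: classify each configuration change as minor or major during impact assessment, then decide the revalidation scope for the whole batch. This is an illustrative sketch only; the classification itself would come from your own impact assessment:</p>

```python
def revalidation_scope(changes: list[str]) -> str:
    """Decide the revalidation scope for a batch of configuration changes.

    'changes' holds impact classifications ("minor" or "major");
    the decision rule is an illustrative assumption, not a standard one.
    """
    if not changes:
        return "no revalidation needed"
    if "major" in changes:
        return "full revalidation, update all validation records"
    return "targeted regression tests on impacted workflows"
```

Recording the classification of each change, and the resulting scope decision, keeps the rationale for partial revalidation auditable.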
<h4>Conclusion</h4>
<p>Validating tools for the management of software development workflows is not rocket science. It's based on a risk assessment of the process in which the tool is used, a good set of requirements, including non-software requirements, and a test plan to verify these requirements in the deployed software version.<br />
<br />
<br />
<br />
If you don't want to start from a blank page, see the QMS software validation templates in the <a href="https://blog.cm-dm.com/pages/Software-Development-Process-templates">Template Repository for software</a>: Software Validation Plan, Software Validation Protocol and Software Validation Report.</p>

IEC 82304-1 - Consequences on agile software development processes (Mitch, 2016-04-08 - Standards, Agile, development process, IEC 62304, IEC 82304, Software Validation)

<p>Continuing our <a href="https://blog.cm-dm.com/post/2016/01/15/IEC-82304-1-latest-news-about-the-standard-on-Health-Software">series about IEC 82304-1</a>, let's see the consequences of this standard on agile software development processes.</p> <h4>IEC 82304-1 in the software development cycle</h4>
<p>The simplest way to present the position of IEC 82304-1 in the software development lifecycle is to use the traditional waterfall process:
<img src="https://blog.cm-dm.com/public/22-IEC-82304-1/.scope_of_IEC_82304-1_in_lifecycle_m.png" alt="scope_of_IEC_82304-1_in_lifecycle.png" style="display:table; margin:0 auto;" title="scope_of_IEC_82304-1_in_lifecycle.png, Dec 2015" />
But it is also applicable with agile methods, as suggested in the following graph:
<img src="https://blog.cm-dm.com/public/22-IEC-82304-1/.IEC_82304-1_with_agile_methods_m.png" alt="IEC_82304-1_with_agile_methods.png" style="display:table; margin:0 auto;" title="IEC_82304-1_with_agile_methods.png, Jan 2016" />
In this example, we still have a separation between the system level, where use requirements and system requirements don't change while a version is being developed, and the software level, where agile cycles (sprints or something else) are performed.<br /></p>
<h4>Continuous user input</h4>
<p>But we can go further in the representation, by considering that user feedback (the input at the top-left corner of the previous graph) changes continuously. In this case, use requirements and system requirements are treated at sprint level. But not all sprints will contain new and/or modified use requirements and/or system requirements. And a validated version won't be released at the end of each sprint.
<img src="https://blog.cm-dm.com/public/22-IEC-82304-1/.IEC_82304-1_with_agile_methods_and_continuous_user_input_m.png" alt="IEC_82304-1_with_agile_methods_and_continuous_user_input.png" style="display:table; margin:0 auto;" title="IEC_82304-1_with_agile_methods_and_continuous_user_input.png, Jan 2016" />
The minor difference in the graphic compared to the previous one, continuous user input, makes a major difference for the agile development process. Here, the user input is not frozen at system level and evolves continuously.<br />
Practically speaking, the system-level requirements are not given in a document, like a statement of work. They are continuously given by the users, when they are provided with demo or stable releases, and are added to the backlog.</p>
<h4>Agile at system and software level</h4>
<p>It's difficult for a medical device manufacturer to be agile at both system and software level. Having user requirements that change continuously can be troublesome.<br />
Usually, user feedback doesn't change the intended use of health software. But sometimes end-users have tons of ideas... especially when they are doctors! Being fully agile means that the device description or even the intended use could change! This is a situation that manufacturers don't want to encounter.<br />
That's why agile methods assign the role of product owner to someone who acts as a filter between the users' wishes and what goes into the backlog.<br /></p>
<h5>Static immutable requirements</h5>
<p>Coming back to IEC 82304-1, this means that some of the user and system requirements won't change: they set the basis of the software development project. These user and system requirements are a <strong>subset</strong> of what is found in sections 4.2 and 4.5 of IEC 82304-1. We can quote some of these types of requirements, for which we are 99% sure that they won't change during the project:</p>
<ul>
<li>Intended use,</li>
<li>Addressed users or patients,</li>
<li>Main use case,</li>
<li>General system security,</li>
<li>Regulatory requirements (hum, not so sure :-).</li>
</ul>
<p>Using software development vocabulary, we can speak of static, immutable requirements. All other requirements are dynamic and mutable.<br />
They can be added, deleted and changed from one sprint to another. They can be at system level, i.e. the other requirements found in IEC 82304-1, or at software level, i.e. the requirements found in section 5.2 of IEC 62304.</p>
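<p>One way to carry this distinction into a backlog tool is to tag each requirement as immutable or mutable, and reject edits to the immutable subset. A minimal sketch under that assumption (the IDs and fields are made up):</p>

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """Backlog entry; 'immutable' marks the static subset (intended use, etc.)."""
    req_id: str
    text: str
    immutable: bool = False

def change_text(req: Requirement, new_text: str) -> Requirement:
    """Return an updated requirement; refuse edits to the immutable subset."""
    if req.immutable:
        raise ValueError(f"{req.req_id} belongs to the frozen subset")
    return Requirement(req.req_id, new_text, req.immutable)

# The intended use is frozen; a system requirement can evolve sprint by sprint.
intended_use = Requirement("SYS-001", "intended use statement", immutable=True)
other_req = Requirement("SYS-042", "initial wording")
```

The guard makes the product owner's filtering role explicit: a sprint can rewrite mutable requirements at will, but any attempt to touch the frozen subset is flagged.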
<h5>Releasing health software</h5>
<p>Since many things can change from one sprint to another, it's not possible to have a stable release after each sprint. You won't put the software in the hands of just anyone after each sprint. Depending on its status, you can determine to whom it will be released:</p>
<ul>
<li>Is it demo-able?
<ul>
<li>No (pliz, agile aficionados, don't frown, c'est la vie!), next sprint,</li>
<li>Yes, do a demo to the product owner,</li>
</ul></li>
<li>Is it stable?
<ul>
<li>No, next sprint,</li>
<li>Yes, consider putting it in the hands of experienced users,</li>
</ul></li>
<li>Is it ready for validation?
<ul>
<li>No, next sprint,</li>
<li>Yes, do a validation.</li>
</ul></li>
</ul>
<p><em>Ready for validation</em> means that its functional scope <em>probably</em> matches the intended use.</p>
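<p>The three gating questions above can be written as a small decision function. The status flags and the audiences are illustrative, not taken from the standard:</p>

```python
def release_audience(demoable: bool, stable: bool, ready_for_validation: bool) -> str:
    """Map a sprint output's status to the audience it may be released to.

    Illustrative gate only: each flag corresponds to one of the three
    questions (demo-able? stable? ready for validation?).
    """
    if not demoable:
        return "nobody, next sprint"
    if not stable:
        return "demo to the product owner"
    if not ready_for_validation:
        return "experienced users"
    return "validation"
```

In other words, the release decision is a ladder: each sprint output climbs as far as its status allows, and only the top rung triggers a formal validation.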
<h5>Validating health software</h5>
<p><em>Probably matches the intended use</em> is not acceptable for software placed on the market. A validation step is required to ensure that the software does match the intended use.<br />
A formal validation step remains mandatory for software qualified as a medical device. IEC 82304-1 helps manufacturers define what should be validated in section 6 of the standard. As already mentioned, this is a major breakthrough of this standard.<br />
In the context of agile methods, the standard leaves the validation method to the choice of the manufacturer. IEC 82304-1 requires in section 6.2 to write a validation plan, which contains the validation methods. Fortunately, it doesn't impose a method. Thus, the validation method can be:</p>
<ul>
<li>Either a formal step of validation, performed after a release candidate is delivered by the software team,</li>
<li>Or a continuous validation included in the sprints.</li>
</ul>
<p>A quick search in the scientific literature about software development, or in developers' forums, shows that nobody performs continuous validation of medical device software. It is possible, but it hasn't been implemented (or disclosed) yet.</p>
<h4>Conclusion</h4>
<p>To conclude this series of articles about IEC 82304-1, the key points are:</p>
<ul>
<li>Its scope is standalone health software,</li>
<li>It calls IEC 62304 for software development, and thus requires ISO 14971,</li>
<li>It defines types of requirements to document at user and system level,</li>
<li>It defines requirements on health software validation,</li>
<li>It will probably be recognized by the FDA and harmonized by the EU when it is published,</li>
<li>It is portable to the framework of agile methods.</li>
</ul>

IEC 82304-1 - Overview of requirements (Mitch, 2016-03-11 - Standards, IEC 62304, IEC 62366, IEC 82304, ISO 14971, Software Validation)

<p>In <a href="https://blog.cm-dm.com/post/2016/01/15/IEC-82304-1-latest-news-about-the-standard-on-Health-Software">a previous article</a> we had an overview of IEC 82304-1 <em>Health software -- Part 1: General requirements for product safety</em>, its scope and its relationships with other standards like IEC 62304.<br />
This article presents the requirements of IEC 82304-1 in more detail (but not too much; we're not going to rephrase the standard).</p> <p>Let's look again at the graphic representing the relationships of IEC 82304-1 with other standards.
<img src="https://blog.cm-dm.com/public/22-IEC-82304-1/.relationship_of_IEC_82304-1_with_other_standards_m.png" alt="relationship_of_IEC_82304-1_with_other_standards.png" style="display:table; margin:0 auto;" title="relationship_of_IEC_82304-1_with_other_standards.png, Dec 2015" />
It requires IEC 62304 but not ISO 14971.<br />
How is it possible and what are the consequences?</p>
<h4>References to IEC 62304</h4>
<p>IEC 82304-1 references IEC 62304 several times, in the requirements or in the notes.<br />
The main reference is in section 5 of IEC 82304-1, titled <em>HEALTH SOFTWARE - software lifecycle process</em>:</p>
<ul>
<li>It references sections 4.2, 4.3, 5, 6, 7, 8 and 9 of IEC 62304,</li>
<li>It doesn't reference section 4.1 (the section about the QMS) of IEC 62304,</li>
<li>It doesn't reference section 1 either (and particularly section 1.4 about compliance with IEC 62304).</li>
</ul>
<p>Anyway, given the small subset of requirements that are not applicable, and given that they have little consequence on software development and maintenance, take it for granted that you have to apply IEC 62304.<br />
<br />
<strong>Manufacturers of health software, welcome to the marvelous world of IEC 62304 and its costly requirements!</strong></p>
<h4>References to ISO 14971</h4>
<p>IEC 82304-1 doesn't reference ISO 14971 as a mandatory standard in its requirements. It only requires you to:</p>
<ul>
<li>perform a risk assessment at system level in section 4.1 <em>Initial RISK ASSESSMENT of HEALTH SOFTWARE PRODUCT</em>, and</li>
<li>define risk mitigation actions in the section 4.5 <em>system requirements</em>.<br /></li>
</ul>
<p>But the note at the bottom of section 4.1 suggests applying the first steps of ISO 14971 to do this initial risk assessment. These first steps are those found in section 4 of ISO 14971.<br />
<br />
Guess what?<br />
Are you going to apply a risk management process different from the one described in ISO 14971?<br />
No, this would add complexity to risk management, which is complex enough as it is!<br />
<br />
<strong>So, you are going to apply the first steps of ISO 14971 to do this initial risk assessment!</strong></p>
<h5>Logical approach</h5>
<p>But this approach remains logical.<br />
IEC 82304-1 requires a preliminary risk assessment at system level, when requirements are perhaps not yet defined (section 4.1 is the very first section of the standard with requirements about health software). Reading between the lines, it says:</p>
<ul>
<li>do an initial (or preliminary, call it what you want) risk assessment to know where you're going: critical software or risk-free software,</li>
<li>then write your system requirements with regard to this initial risk assessment.</li>
</ul>
<h5>Consequences at system level</h5>
<p>Mimicking the management of software risks in IEC 62304, IEC 82304-1 requires defining mitigation actions in the form of system requirements (software requirements in IEC 62304).<br />
IEC 82304-1 also requires validating the health software product. Thus the validation plan shall contain tests that bring evidence that the mitigation actions are in place and effective (like software system tests in IEC 62304).<br /></p>
<h5>Types of risks</h5>
<p>Risks found and mitigated at system level will include a larger set of hazardous situations that are not addressed <strong>explicitly</strong> by IEC 62304, which is focused on software:</p>
<ul>
<li>Instructions for use,</li>
<li>Labelling,</li>
<li>IT systems (detailed in section 7.2.3.2 of IEC 82304-1).</li>
</ul>
<p>These types of risks don't immediately come to the mind of the reader of IEC 62304. Software risk management is usually focused on software failure, not on the aforementioned types of risks.<br />
<br />
For those who know IEC 60601-1, we see here once again that IEC 82304-1 takes the same role as IEC 60601-1, but for standalone software.</p>
<h5>ISO 14971 is mandatory</h5>
<p>Strictly speaking, the mandatory nature of ISO 14971 is only given by the requirements in section 4.2 of IEC 62304.<br />
Section 4.2 of IEC 62304 is called by section 5 of IEC 82304-1. Thus ISO 14971 is mandatory for the software lifecycle processes. No surprise here: ISO 14971 is really the gold standard for patient risk management.<br />
It looks more consistent to apply the risk management process of ISO 14971 throughout the full lifecycle of health software. But IEC 82304-1 leaves the door open for other risk management processes at system level.</p>
<h5>Exemption for 3rd party software</h5>
<p>IEC 82304-1 also leaves the door open at software level. Section 5 of IEC 82304-1 authorizes the health software manufacturer to apply the ISO 14971 risk management process only partially when health software contains third-party software.<br />
This looks pretty much like the SOUP concept of IEC 62304, since IEC 82304-1 requires analyzing residual risks for this third-party software and implementing mitigation actions if risks are unacceptable.</p>
<h4>References to IEC 62366-1</h4>
<p>IEC 82304-1 requires in section 4.2 to define the <em>HEALTH SOFTWARE PRODUCT use requirements</em>. But it doesn't require applying IEC 62366-1 (nor the previous version, IEC 62366:2008). IEC 62366-1 is only quoted in a note at the bottom of the section, as a potential source of information.<br />
But for health software qualified as a medical device, IEC 62366-1 is already recognized by the regulation authorities (only by the FDA, at the time of writing). Thus, even if IEC 62366-1 is not referenced, manufacturers of standalone software medical devices will apply IEC 62366-1.<br />
<br />
Anyway, for health software NOT qualified as a medical device, IEC 62366-1 is absolutely not required.</p>
<h4>System-level requirements</h4>
<p>The references of IEC 82304-1 to other standards make a framework that allows reusing the state of the art in the software development, maintenance and risk management processes. But the novelty of the standard resides in the specific sections dealing with the system level.<br />
The clauses of IEC 82304-1 aim to define a framework for the design of the standalone software system:</p>
<ul>
<li>Use requirements: the top-most requirements,</li>
<li>System requirements: technical requirements consistent with use requirements,</li>
<li>Validation: how to validate the use requirements.</li>
</ul>
<p>They also aim to define a framework for user documentation and labeling, with clauses about:</p>
<ul>
<li>Identification,</li>
<li>Accompanying documents,</li>
<li>Content of accompanying documents, especially for the system administrator.</li>
</ul>
<p>And they finally aim to define a framework for post-market activities:</p>
<ul>
<li>Software maintenance,</li>
<li>Revalidation,</li>
<li>Communication with interested parties,</li>
<li>Decommissioning and disposal.</li>
</ul>
<h4>Health Software Validation</h4>
<p>The greatest improvement of IEC 82304-1 is its intent to fill the big gap between the software verification of IEC 62304 and the software validation requirements of regulations, when health software is regulated as a medical device.<br />
Validation is performed against a set of top-level requirements. That's why IEC 82304-1 contains clauses about use requirements, system requirements, and accompanying documents.<br />
<br />
Validation is not a step, it is a journey. A health software product shall be kept validated during its whole life in the field. That's why IEC 82304-1 contains clauses about post-market activities.<br /></p>
<h4>Conclusion</h4>
<p>IEC 82304-1 fills the gap between IEC 62304 and the software medical device validation required by regulations. To do so, it contains a minimum set of clauses defining what is needed at system level, and it references existing state-of-the-art standards (ISO 14971 and IEC 62304) for the software level.<br />
This approach makes IEC 82304-1 relatively short compared to IEC 62304, ISO 14971, and IEC 62366-1. But its content is essential to ensure health software products are correctly maintained in a validated state.<br />
<br />
<br />
<a href="https://blog.cm-dm.com/post/2016/04/08/IEC-82304-1-Consequences-on-agile-software-development-processes">Next Article</a> is on the application of IEC 82304-1 with agile methods.</p>

IEC 82304-1 - latest news about the standard on Health Software (Mitch, 2016-01-15 - Standards, CE Mark, FDA, IEC 62304, IEC 82304, ISO 14971, Software Validation)

<p>The IEC 82304-1 <em>Health software -- Part 1: General requirements for product safety</em> standard is still under development. Its status is visible on the <a href="http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=59543">page of the ISO website dedicated to IEC 82304-1</a>. There is even a preview of the first three pages of this draft standard.</p> <p>The last draft of IEC 82304-1 was published for comments in July 2015. It is a DIS (draft international standard) and should pass a milestone in early 2016, by being accepted by the drafting committee. This means that an FDIS (final draft international standard) should be published in the first half of 2016. If it is accepted, the final version should be published by the end of 2016.</p>
<h4>What is the scope of IEC 82304-1?</h4>
<p>The scope of IEC 82304-1 intersects the scope of IEC 62304 but is not identical. It includes different types of software and different steps of the software lifecycle.
IEC 82304-1 deals with health software. The definition of health software is given in section 3.6 of the standard:</p>
<blockquote><p>HEALTH SOFTWARE<br />
software intended to be used specifically for maintaining or improving health of individual persons, or the delivery of care.</p>
<p></p></blockquote>
<p>It is completed with the definition of health software product in section 3.7 of the standard:</p>
<blockquote><p>HEALTH SOFTWARE PRODUCT<br />
combination of HEALTH SOFTWARE and ACCOMPANYING DOCUMENTS</p>
<p></p></blockquote>
<p>The definition of medical device software, given at section 3.x of IEC 62304:2015, is different from the definition of health software:</p>
<blockquote><p>MEDICAL DEVICE SOFTWARE<br />
SOFTWARE SYSTEM that has been developed for the purpose of being incorporated into the MEDICAL DEVICE being developed or that is intended for use as a MEDICAL DEVICE<br />
NOTE: This includes a MEDICAL DEVICE software product, which then is a MEDICAL DEVICE in its own right.</p>
<p></p></blockquote>
<p>The definition of SOFTWARE PRODUCT, which was used in IEC 62304:2006, was removed from IEC 62304:2015. We now have the definition of HEALTH SOFTWARE PRODUCT in IEC 82304-1. This is one sign, amongst others, that IEC 82304-1 and IEC 62304 are meant to work as a two-standard team.</p>
<h4>Types of software</h4>
<h5>Types of software regarding the medical intended use</h5>
<p>The first main difference between the two definitions is the intended use. IEC 62304 deals only with software with a medical intended use, whereas IEC 82304-1 deals with any kind of software that directly or indirectly has an effect on health.<br />
The scope of IEC 82304-1 is broader than the scope of IEC 62304. The following types of software are in the scope of IEC 82304-1 but not of IEC 62304:</p>
<ul>
<li>Radiology Information Systems (RIS),</li>
<li>Prescription Management Systems (PMS),</li>
<li>Laboratory Information Management Systems (LIMS),</li>
<li>Mobile Apps, which are not Mobile Medical Apps, according to the <a href="https://blog.cm-dm.com/post/2015/03/20/When-the-FDA-releases-guidances-in-burst-mode">FDA Guidances on this subject</a>,</li>
<li>Software that is not qualified as a medical device, according to the <a href="https://blog.cm-dm.com/post/2012/02/06/MEDDEV-2.1/6-Guidelines-on-classification-of-standalone-software-released%21">MEDDEV 2.1/6 EU Guidance</a>.</li>
</ul>
<p>Thus IEC 82304-1 includes in its scope standalone software that is not regulated as a medical device.</p>
<h5>Types of software regarding the platform</h5>
<p>IEC 82304-1 deals only with standalone software. Contrary to IEC 62304, it doesn't deal with software embedded in medical devices or in devices with specific hardware. Only software running on a standard PC, server, tablet, or smartphone with a general-purpose operating system is in the scope of IEC 82304-1.<br />
The graphic below, borrowed from IEC 82304-1, shows the scope of this standard versus the scope of IEC 62304.
<img src="https://blog.cm-dm.com/public/22-IEC-82304-1/.Scope_of_IEC_82304-1_m.png" alt="Scope_of_IEC_82304-1.png" style="display:table; margin:0 auto;" title="Scope_of_IEC_82304-1.png, Dec 2015" />
Note: the rectangle in green is not present in the graphic of the standard. It was added here for clarification to show the scope of IEC 62304.</p>
<h4>Steps of the software lifecycle</h4>
<p>IEC 82304-1 deals with standalone health software product. It defines requirements at the system/product level like:</p>
<ul>
<li>Product Requirements,</li>
<li>Product Validation,</li>
<li>Product Identification and Instructions For Use,</li>
<li>Post-market activities.</li>
</ul>
<p>And it references IEC 62304:2015 for requirements at software level.<br />
IEC 82304-1 kind of takes the place of IEC 60601-1 or IEC 61010 for standalone software. IEC 60601-1 defines requirements at system level for Programmable Electric Medical Systems (PEMS), and references IEC 62304 for the software lifecycle.<br />
<br />
Likewise, IEC 82304-1 defines requirements at system level for health software systems, and references a subset of IEC 62304 for the software lifecycle.<br />
The graphic below sums this up.
<img src="https://blog.cm-dm.com/public/22-IEC-82304-1/.scope_of_IEC_82304-1_in_lifecycle_m.png" alt="scope_of_IEC_82304-1_in_lifecycle.png" style="display:table; margin:0 auto;" title="scope_of_IEC_82304-1_in_lifecycle.png, Dec 2015" />
Consequence: if you want to apply IEC 82304-1 to your software, you have to apply a subset of IEC 62304 at the same time.</p>
<h4>Relationships with other standards</h4>
<p>Another way of putting this standard in the picture is to draw its relationships with other standards, like we already did <a href="https://blog.cm-dm.com/post/2013/04/04/IEC-62304-vs-IEC-60601-1-and-IEC-61010">here for IEC 62304</a>.
<img src="https://blog.cm-dm.com/public/22-IEC-82304-1/.relationship_of_IEC_82304-1_with_other_standards_m.png" alt="relationship_of_IEC_82304-1_with_other_standards.png" style="display:table; margin:0 auto;" title="relationship_of_IEC_82304-1_with_other_standards.png, Dec 2015" />
This graphic anticipates a bit what is explained in the next article: ISO 14971 is not required by IEC 82304-1 but is still required by IEC 62304.</p>
<h4>Is it mandatory?</h4>
<h5>Short answer:</h5>
<p>No. A standard is never mandatory, except in very rare cases, but surely not for health software!<br /></p>
<h5>Not so long answer:</h5>
<p>Standards are never mandatory, but when they are recognized by regulation authorities, like the FDA, they become de facto "gold standards".<br />
So, for standalone software regulated as medical devices (e.g. mobile medical apps), IEC 82304-1 could become recognized by the regulation authorities as soon as the final version is published. That would make it almost mandatory.<br />
But for standalone software NOT regulated as medical devices, since it is out of the scope of regulation authorities, the manufacturers of such software may show very little willingness to implement IEC 82304-1!<br />
<br />
In a nutshell:</p>
<ul>
<li>if you develop standalone software medical devices, be prepared to see IEC 82304-1 recognized by the FDA and harmonized by the European Commission once it is published, probably not before late 2016,</li>
<li>if you develop standalone health software not qualified as a medical device, we don't know what regulation authorities will make of this standard. But odds are low that it will become mandatory.<br /></li>
</ul>
<p><br />
<a href="https://blog.cm-dm.com/post/2016/03/11/IEC-82304-1-Overview-of-requirements">Next time</a> we'll see the requirements of this standard in more detail.</p>

Validation of software used in production and QMS - Part 3: Validation Protocol and Reports (Mitch, 2015-08-28 - Regulations, CE Mark, FDA, ISO 13485, Software Validation)

<p>We continue this <a href="https://blog.cm-dm.com/post/2015/06/19/Validation-of-software-used-in-production-and-QMS-Part-1-introduction">series on validation of software used in production</a> and QMS with the Validation Protocol and Reports.</p> <p>The Validation Master Plan (VMP) comes with other documents:</p>
<ul>
<li>The <a href="https://blog.cm-dm.com/public/Templates/CSV/2015-VMP-template.docx">Validation Master Plan template</a> itself, which contains general provisions for software validation,</li>
<li>The <a href="https://blog.cm-dm.com/public/Templates/CSV/2015-Validation-Protocol-template.docx">Validation Protocol template</a>, which contains the application of the VMP to a given system,</li>
<li>The <a href="https://blog.cm-dm.com/public/Templates/CSV/2015-Validation-Report-template.docx">Validation Report template</a>, which contains the results of the validation protocol for a system,</li>
<li>The <a href="https://blog.cm-dm.com/public/Templates/CSV/2015-Final-Validation-Report-template.docx">Final Validation Report</a>, which contains the conclusion of the validation of a system.</li>
</ul>
<p>I share these templates under the conditions of the <a href="https://blog.cm-dm.com/post/2011/11/04/License">CC-BY-NC-ND license</a>.</p>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-nd/3.0/fr/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/">Creative Commons Attribution-NonCommercial-NoDerivs 3.0 France License</a>.
<br />
<br />
<h4>Content of the Validation Protocol</h4>
<p>The validation protocol is the instantiation of the provisions of the Validation Master Plan (VMP). You will have one validation protocol per system needing validation.<br />
The content of the validation protocol repeats the phases found in the VMP, with specific provisions, if needed.<br />
<br />
For example, a system with a low level of concern may have a validation protocol with IQ and PQ only, considering that OQ is not necessary given the system's features. Of course, skipping OQ shall be justified and documented!
<br /></p>
<h4>Content of the Validation Reports</h4>
<p>The validation reports are simply the records of the execution of the validation protocol.</p>
<h5>Validation Report</h5>
<p>The validation report is filled with the data gathered during the qualification phases. There may be a single report recording all phases, or multiple reports. When qualification phases are long or verbose, having one report per phase is a good option.<br />
The validation report ends with a conclusion about the conformity of the product to the requirements verified during the phase. It's important to keep this part, since it marks the status of the system at the end of the phase. That's also the customary way of doing things in the world of physical equipment qualification!</p>
<h5>Final Validation Report</h5>
<p>The final validation report contains:</p>
<ul>
<li>The identification of the system that is validated,</li>
<li>The final conclusion about the validation.</li>
</ul>
<p>The identification is important, to be sure which version is validated and who can use what in routine. Auditors also love searching for pitfalls in the identification of the validated system.<br />
The final conclusion is about the compliance of the system with regulatory requirements. Note the difference between the validation reports (compliance to the requirements in the scope of each phase) and the final validation report (compliance to regulatory requirements).
<br />
<br />
<br />
That's the end of this <a href="https://blog.cm-dm.com/post/2015/06/19/Validation-of-software-used-in-production-and-QMS-Part-1-introduction">series about computerized systems validations</a>.<br />
With all these templates and explanations, you should be ready to perform your own computerized system validations. Feel free to ask questions in the comments!<br /></p>
<h3>Validation of software used in production and QMS - Part 2: Validation Master Plan</h3>
<p><em>2015-07-24, by Mitch. Tags: Regulations, CE Mark, FDA, ISO 13485, ISO 14971, Software Validation</em></p>
<p>We continue this <a href="https://blog.cm-dm.com/post/2015/06/19/Validation-of-software-used-in-production-and-QMS-Part-1-introduction">series on validation of software used in production</a> and QMS with the Validation Master Plan (VMP).<br />
Better than endless explanations, I added a Validation Master Plan template to my <a href="https://blog.cm-dm.com/pages/Software-Development-Process-templates">templates repository page</a>.</p> <p>The Validation Master Plan (VMP) is here: <a href="https://blog.cm-dm.com/public/Templates/CSV/2015-VMP-template.docx">Validation Master Plan template</a>. It contains general provisions for software validation.<br />
<br />
It comes with other documents that we'll see in the next post:</p>
<ul>
<li>The Validation Protocol template, which contains the application of the VMP to a given system,</li>
<li>The Validation Report template, which contains the results of the validation protocol for a system,</li>
<li>The Final Validation Report, which contains the conclusion of the validation of a system.</li>
</ul>
<p>I share these templates under the conditions of the <a href="https://blog.cm-dm.com/post/2011/11/04/License">CC-BY-NC-ND license</a>.</p>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-nd/3.0/fr/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/">Creative Commons Attribution-NonCommercial-NoDerivs 3.0 France License</a>.
<br />
<br />
<h4>Content of the VMP</h4>
<p>The Validation Master Plan contains the provisions for:</p>
<ul>
<li>Identifying systems that require validation,</li>
<li>Defining the level of scrutiny of the validation.</li>
</ul>
<h5>Selecting systems</h5>
<p>Not all systems used by a company should be validated. As we already saw <a href="https://blog.cm-dm.com/post/2015/06/19/Validation-of-software-used-in-production-and-QMS-Part-1-introduction">in the previous article</a>, only those in the scope of requirements found in applicable regulations and standards shall be validated.<br />
The VMP template gives hints to define the selection criteria and to present the results of the selection.</p>
<h5>Level of concern</h5>
<p>The VMP template introduces the concept of "level of concern", to help the validation team define the steps required by the validation.<br />
The level of concern is borrowed from the concept found in FDA guidances on medical device software. It is adapted here to the context of computerized system validation.</p>
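<p>To illustrate (this is a sketch with hypothetical criteria, not the actual rules of the VMP template), the mapping from risk factors to a level of concern, and from the level of concern to proportionate qualification steps, could look like this:</p>

```python
# Hypothetical mapping from risk factors to a level of concern, and from the
# level of concern to the qualification steps it triggers. The factor names
# and thresholds are illustrative assumptions, not taken from the VMP template.

def level_of_concern(affects_product_quality: bool,
                     records_qms_evidence: bool,
                     failure_detectable_downstream: bool) -> str:
    """Classify a computerized system as 'high', 'medium' or 'low' concern."""
    if affects_product_quality and not failure_detectable_downstream:
        return "high"
    if affects_product_quality or records_qms_evidence:
        return "medium"
    return "low"

def required_steps(concern: str) -> list:
    """Qualification steps proportionate to the level of concern."""
    return {
        "high":   ["DQ", "IQ", "OQ", "PQ"],
        "medium": ["IQ", "OQ", "PQ"],
        "low":    ["IQ", "PQ"],  # skipping OQ must be justified in the protocol
    }[concern]
```

<p>For example, a bug tracker recording QMS evidence, whose failures are caught by human review, would land in "medium" concern and get IQ, OQ and PQ.</p>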
<h5>Validation steps</h5>
<p>The validation steps are the very classical ones found in every validation protocol:</p>
<ul>
<li>Design Qualification (DQ),</li>
<li>Installation Qualification (IQ),</li>
<li>Operations Qualification (OQ),</li>
<li>Performance Qualification (PQ).</li>
</ul>
<p>These concepts don't match well with those found in software validation, but some links can be drawn between them.</p>
<h5>Design Qualification</h5>
<p>Design qualification is applicable only to a subset of selected systems. DQ is applicable when software is internally developed or when its configuration is complex, with scripting and the like. See VMP template for hints on DQ applicability.<br />
DQ is simply a software development project. The most obvious model of software development is the waterfall model but any other kind of model is possible.<br />
The DQ should contain the classical documents and records found in a software development project:</p>
<ul>
<li>Development plan,</li>
<li>Software Requirements Specifications,</li>
<li>Design review,</li>
<li>Software Test Plan,</li>
<li>Software Test Report,</li>
<li>Final review.</li>
</ul>
<p>You may use the "all-in-one template" in the <a href="https://blog.cm-dm.com/pages/Software-Development-Process-templates">templates repository page</a> to document the development project of a software tool.<br /></p>
<h5>DQ is not IQ / OQ / PQ</h5>
<p>Don't miss the point about DQ. It's a phase which is different from IQ / OQ / PQ.<br />
To make things simple, DQ is made on a test platform or pilot platform, IQ/OQ/PQ are made on the target platform.<br />
There may be cases where the test platform is also the target platform. But, to make things clear and catch the concepts, remember <em>DQ equals test platform</em> and <em>IQ / OQ / PQ equals target platform</em>.
<img src="https://blog.cm-dm.com/public/Templates/CSV/.DQ-IQ-OQ-PQ_m.png" alt="DQ-IQ-OQ-PQ.png" style="display:block; margin:0 auto;" title="DQ-IQ-OQ-PQ.png, Jul 2015" />
Using the language of software development teams, the version output of DQ is like a Release Candidate version ready to be tested by other people than the software development team.</p>
<h5>Installation Qualification</h5>
<p>Installation qualification is the verification of the installation of software on its target platform. The IQ can be made either during the installation or after the installation.<br />
<br />
When it is done during the installation, the tester runs the installation and verifies at the same time that the installation is running well. The IQ is then a mix of installation tests (eg: running the installer) and of inspections (eg: checking the hardware version, the OS version, the documentation...)<br />
<br />
When it is done after the installation, the verification is an inspection of the installation records. The tester goes through all installation records and checks that the installation was correct.<br />
<br />
Note that the IQ happens on the target platform. It shouldn't be confused with the installation of software on a test platform during DQ. Verifying that software can be installed and run on the test platform is a part of Design Qualification or of preliminary tests before the IQ.<br />
<br />
Using the language of software development teams, the version installed in IQ is the Release Candidate.</p>
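<p>As a sketch of the "inspection after installation" approach described above, the IQ checks can be scripted and their results attached to the validation report. The expected values below are hypothetical placeholders, to be recorded from the approved release:</p>

```python
# Minimal IQ check sketch: verify the target platform and the installed
# package before signing off the Installation Qualification.
# All expected values are placeholders for this illustration.

import hashlib
import platform

EXPECTED = {
    "os": "Linux",           # target platform stated in the protocol
    "app_version": "2.3.1",  # hypothetical approved version
}

def sha256_of(path: str) -> str:
    """Checksum of the installer, to compare with the approved release."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def iq_report(installed_version: str, expected: dict) -> dict:
    """Pass/fail per IQ check, to be attached to the validation report."""
    return {
        "os_ok": platform.system() == expected["os"],
        "version_ok": installed_version == expected["app_version"],
    }
```

<p>Each key of the report maps to one inspection line of the IQ records.</p>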
<h5>Operations Qualification</h5>
<p>Operations Qualification is the verification of the software functions on the target platform. The OQ is made after the IQ (I can't verify software that wasn't properly installed).<br />
OQ is a set of tests verifying the functional requirements of the software. The functional requirements can be either user requirements or technical requirements. These requirements are input data of the validation process.<br />
<br />
When OQ is preceded by DQ, DQ tests and OQ tests may overlap. The simplest solution is to redo all the tests passed during DQ. OQ tests can also be a reduced set of DQ tests - like typical user scenarios. OQ tests can also be completely different tests, if DQ was oriented towards the verification of technical requirements.<br />
<br />
When OQ is not preceded by DQ, a test protocol verifying the requirements shall be written.<br />
<br />
Using the language of software development teams, the version output of OQ is like RC2 or RC3, where most of the bugs found by users have been removed.</p>
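<p>The OQ test set can be scripted so that each functional requirement gets a recorded pass/fail verdict. A minimal sketch, where the requirement IDs and the system under test are hypothetical stubs:</p>

```python
# OQ sketch: each functional requirement is verified by a scripted check on
# the target platform, and the verdicts are recorded for the validation report.

def run_oq(tests: dict) -> dict:
    """Execute each requirement's check; a raised exception counts as a fail."""
    results = {}
    for req_id, check in tests.items():
        try:
            results[req_id] = bool(check())
        except Exception:
            results[req_id] = False
    return results

# Hypothetical system under test: a stub login function.
def system_login(user: str) -> bool:
    return user == "qa"

oq_results = run_oq({
    "REQ-001: valid user can log in": lambda: system_login("qa"),
    "REQ-002: unknown user is rejected": lambda: not system_login("guest"),
})
```

<p>Each verdict traces back to a requirement, which is exactly what the validation report needs.</p>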
<h5>Performance Qualification</h5>
<p>Performance Qualification is the verification of software in routine use. The PQ is made after the OQ (I can't verify in routine use if software functions haven't been properly tested before).<br />
<br />
PQ can be a set of structured tests verifying user scenarios. It can also be made of free tests by end-users. The PQ should contain a predefined period of surveillance of software used in routine by the end-users.<br />
<br />
Using the language of software development teams, the version output of PQ is the Final Release of software.</p>
<h4>Latitude in DQ / IQ / OQ / PQ content</h4>
<p>These four steps are heavy to implement, but we have escape plans.<br />
<br />
The VMP gives latitude to the validation team in the exclusion of validation steps and in their content. Provided that rationale and evidence are given, it is possible to make the validation simpler than these four steps.<br />
DQ is obviously not required for purchased software with minimal configuration settings. It's possible to simplify the IQ, OQ and PQ steps when the context allows it. Likewise, it's possible to exclude IQ or OQ with justification. It looks difficult to exclude PQ, but it may be possible to merge OQ and PQ into a single step.</p>
<h4>Retrospective validation</h4>
<p>With legacy systems, it's possible to do a retrospective validation. This is another kind of escape plan.<br />
It is based on the analysis of historical data of a system already used in routine. The retrospective validation consists in assessing the conformity of the system to regulations by analyzing:</p>
<ul>
<li>Records output by the system,</li>
<li>Non-conformities linked to the system or to processes involving the system,</li>
<li>Customer complaints linked to the system or to processes involving the system,</li>
<li>Any other relevant data (argh, can't be more precise ...).</li>
</ul>
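<p>The historical data analysis can often be automated from exports of the quality records. A sketch, assuming a simple CSV export; the column names and the system identifier "LIMS-01" are invented for the example:</p>

```python
# Retrospective validation sketch: summarize the historical quality data of a
# legacy system from an exported CSV of quality records.

import csv
from collections import Counter
from io import StringIO

def summarize_quality_records(csv_text: str, system: str) -> Counter:
    """Count records (non-conformities, complaints...) attributed to a system."""
    reader = csv.DictReader(StringIO(csv_text))
    return Counter(row["type"] for row in reader if row["system"] == system)

# Hypothetical export of the quality records database.
export = """type,system,date
non_conformity,LIMS-01,2014-02-10
complaint,OTHER,2014-03-01
complaint,LIMS-01,2014-05-12
"""
summary = summarize_quality_records(export, "LIMS-01")
```

<p>A system with years of routine use and an empty summary is a good candidate for a retrospective validation rationale.</p>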
<p>Be careful with retrospective validation, because it is not "appreciated" by auditors and inspectors. They're going to search for the pitfall in this kind of validation.<br />
The easiest pitfall to find is when you modify the system after you've validated it retrospectively. How will you convince the auditor that a complete revalidation is not necessary? A tiny software change can have dire side effects.<br />
<br />
Anyway, retrospective validation is sometimes the only way to validate a legacy system that has been used for a long time without any bugs, and without any will of the users to modify it.<br />
<br />
<br />
<br />
Next time, we'll see the <a href="https://blog.cm-dm.com/post/2015/08/28/Validation-of-software-used-in-production-and-QMS-Part-3-Validation-Protocol-and-Reports">Validation Protocol and Validation Reports</a>.</p>
<h3>Content of DHF, DMR and DHR for medical device software - Part 1: DHF</h3>
<p><em>2014-10-03, by Mitch. Tags: Regulations, development process, FDA, Guidance, IEC 62304, Software Validation, Software Verification</em></p>
<p>After a temporary absence, I'm back on the waves with a new series of articles about the files required by the 21 CFR 820 regulations:</p>
<ul>
<li>DHF: Design History File,</li>
<li>DMR: Device Master Record,</li>
<li>DHR: Device History Record.</li>
</ul>
<p>Let's begin with the DHF.</p> <h4>What is the Design History File (DHF)?</h4>
<p>The DHF is a term defined by the US regulations. You can find it in the online copy of 21 CFR on the FDA website.</p>
<h5>Definition</h5>
<p>The <a href="http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=820.3">section 21 CFR 820.3(e)</a>, gives the definition of DHF:<br /></p>
<ul>
<li><em>Design history file (DHF) means a compilation of records which describes the design history of a finished device.</em></li>
</ul>
<p><br />
Okay, the DHF applies to a finished device, not to a prototype or to a device still in the design phase.</p>
<h5>Design Controls</h5>
<p>The <a href="http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=820.30">section 21 CFR 820.30</a> about Design Controls states the requirements about the DHF:<br /></p>
<ul>
<li>21 CFR 820.30 (a): <em>General</em>
<ul>
<li><em>(1) Each manufacturer of any class III or class II device, and the class I devices listed in paragraph (a)(2) of this section, <strong>shall establish and maintain procedures to control the design of the device</strong> in order to ensure that specified design requirements are met.</em></li>
<li><em>(2) The following class I devices are subject to design controls:</em>
<ul>
<li><em>(i) Devices automated with computer software</em></li>
<li>(...)</li>
</ul></li>
</ul></li>
</ul>
<p>In other words, you shall maintain a DHF, whatever the class of your medical device software (even class I).</p>
<ul>
<li>21 CFR 820.30 (j): <em>Each manufacturer shall establish and maintain a DHF for each type of device. The DHF shall contain or reference the records necessary to demonstrate that the design was developed in accordance with the approved design plan and the requirements of this part.</em><br /></li>
</ul>
<p>In other words, the first sentence of this last section requires establishing and maintaining a DHF for each type of device. For standalone software, a "type of device" may be a single software product, a system made of software working together (e.g. a client and a server), or a software suite (with a well-established list of software in the suite!).<br />
<br />
The second sentence requires gathering all the records necessary to prove that the design controls requirements of 21 CFR 820.30 are met. These requirements are described in sections (b) to (i) of 21 CFR 820.30. They list the mandatory steps of a design process: <em>Design and Development Planning, Design Input, Design Output, Design Review, Design Verification, Design Validation, Design Transfer</em>.<br />
<br />
Thus the DHF contains the records, which prove that these mandatory steps were actually followed.</p>
<h4>Design Controls for software</h4>
<p>The "translation" of these steps for software design can be found in two places: in the <a href="http://www.fda.gov/MedicalDevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm">"General Principles of Software Validation" (GPSV) FDA guidance</a>, or in the IEC 62304 FDA recognized standard.<br />
We can draw links between the GPSV and IEC 62304 chapters and the design steps listed in 21 CFR 820.30.<br />
<br />
Note: these links are relevant for standalone software. When software is embedded in a hardware device, IEC 62304 requirements are not enough; e.g. design input also covers requirements management at the system level.</p>
<h5>Design and Development Planning</h5>
<p>GPSV <em>5.2.1. Quality Planning</em>, and IEC 62304 <em>5.1 Software development planning</em> require defining procedures and/or plans to be followed during design.</p>
<h5>Design Input</h5>
<p>GPSV <em>5.2.2 Requirements</em>, and IEC 62304 <em>5.2 Software requirements analysis</em>.<br />
For both documents, this is the input data of the software requirements definition (the high-level requirements, the use cases, the user requirements not yet formalized).</p>
<h5>Design Output</h5>
<p>GPSV <em>5.2.2 Requirements</em>, and IEC 62304 <em>5.2 Software requirements analysis</em>. But now, this is the output data of software requirement definition (the actual software requirements written in a formal way, used to design software).<br />
For GPSV, we have also <em>5.2.3. Design, 5.2.4. Construction or Coding</em>, and for IEC 62304 we have <em>5.3 Software Architectural Design, 5.4 Software Detailed Design, 5.5 Software Unit implementation and verification, 5.6 Software integration and integration testing</em> (for the integration part).</p>
<h5>Design Review</h5>
<p>GPSV <em>3.5 Design review</em> section gives basic principles of design reviews for software, and requires design reviews in <em>5.2.2 Requirements</em> and <em>5.2.3. Design</em>.<br />
IEC 62304 doesn't use the term <em>design review</em> but uses the word <em>verify</em> instead. Sections <em>5.2.6 Verify software requirements</em>, <em>5.3.6 Verify software architecture</em>, and <em>5.4.4 Verify detailed design</em> are good landmarks to define the content of a design review.<br />
There may be one or more design reviews, depending on the complexity and the length of the software development. But there shall be at least one design review, containing the review of the above elements in its agenda.</p>
<h5>Design Verification</h5>
<p>Verification is testing the software. Provisions for tests are described in GPSV <em>5.2.5. Testing by the Software Developer</em> and <em>5.2.6. User Site Testing</em> (when users take part in verification tests), and in IEC 62304 <em>5.6 Software integration and integration testing</em> (for the testing part) and <em>5.7 Software System testing</em>.</p>
<h5>Design Validation</h5>
<p>The 21 CFR 820.3 definition of design validation is <em>establishing by objective evidence that device specifications conform with user needs and intended use</em>.<br />
The recommendations of the GPSV document about validation are in fact the whole section 5.2.<br />
Narrowing the scope to what happens after verification, validation is <em>5.2.6 User Site Testing</em> (when users take part in validation tests).<br />
For IEC 62304 we don't have any relevant section here, because this standard ends at the verification phase. But for standalone software, the requirements described in <em>5.7 Software System testing</em> are quite convenient to formalize and record a user validation step.<br />
Another way of appraising validation is to apply the recommendations of the GPSV from A to Z.</p>
<h5>Design Transfer</h5>
<p>In the GPSV, we don't have a specific chapter, since it stops its list of recommendations at the end of validation.<br />
In IEC 62304, section <em>5.8 Software release</em> contains some requirements about design transfer. Design transfer can be seen as freezing a software configuration and its associated documentation (design docs, release notes...). But design transfer is more than that, because it aims to prepare the Device Master Record.<br />
We'll see that in the next article of this series.</p>
<h4>Content of the DHF for software</h4>
<p>Based on the steps of design controls listed above, we can list the kind of documents that we need to prove that these steps were followed:</p>
<ul>
<li>Planning: A design procedure and/or plans for design, i.e. project management plan, software development plan, software configuration management plan, and risk management plan,</li>
<li>Input: any input document relevant for software design (algorithms, scientific articles, user requirements, prototypes, mock-ups...), including preliminary risk assessment report and preliminary usability specifications,</li>
<li>Output: software requirements specifications, usability specification, risk assessment report, software architecture design, software detailed design, ... and the code (better in a software configuration management tool),</li>
<li>Review: review meeting reports (be it architectural, detailed design, code reviews, integration review or else), according to plans,</li>
<li>Verification: software test plan, software test description, software test reports, for each test phase,</li>
<li>Validation: software test plan, software test description, software test reports, for the validation test phase if there is one, plus validation review meeting reports,</li>
<li>Transfer: version delivery description, release notes... (see next article)</li>
</ul>
<p>You may find some documents templates on <a href="https://blog.cm-dm.com/pages/Software-Development-Process-templates">the templates repository for software development process page</a>.</p>
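<p>A simple completeness check can be run on a DHF index before closing a design review. This is a hypothetical sketch; the category names mirror the list above, and the index structure is invented for illustration:</p>

```python
# Hypothetical DHF completeness check: verify that every design-control
# category has at least one record before the design review closes.

REQUIRED_CATEGORIES = {
    "planning", "input", "output", "review",
    "verification", "validation", "transfer",
}

def missing_dhf_categories(index: dict) -> set:
    """Return the design-control categories with no record in the DHF index."""
    return {c for c in REQUIRED_CATEGORIES if not index.get(c)}

dhf_index = {
    "planning": ["software development plan"],
    "input": ["user requirements", "mock-ups"],
    "output": ["SRS", "architecture", "code baseline"],
    "review": ["design review minutes"],
    "verification": ["software test report"],
    "validation": [],  # no record yet: the check below will flag it
    "transfer": ["release notes"],
}
```

<p>Running the check on this index flags "validation" as the only category still missing a record.</p>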
<h4>Evolution of the DHF with multiple software versions</h4>
<p>Software versions are never frozen for long. Sooner or later, a new incremental version, a brand new version, or patches are released.<br />
All these patches or evolutions of software have to be recorded in the DHF. To do so, the way they are released needs to be planned, and they need to be documented.</p>
<h5>Software maintenance plan</h5>
<p>GPSV has the section <em>5.2.7. Maintenance and Software Changes</em>, and IEC 62304 has the chapter <em>6 Software maintenance process</em>. Both require establishing a maintenance process, analysing user problems or requests, and implementing changes using the development process (under the change control provisions of the quality management system).<br />
The DHF may contain a software maintenance procedure and/or a software maintenance plan, to keep track of how design changes were planned.</p>
<h5>Software maintenance documentation</h5>
<p>Software maintenance activities are actually design activities for evolutions, and design flaw fixes for bugs. The documentation to add to the DHF is either the updates of all the design documents listed in the design phases (especially updates of the risk assessment report), or documents describing patches (if bug fixes didn't change the design and the list of tests).<br />
As its name suggests, the Design History File contains the history of the design! Thus it shall contain all versions of the software documents created for each software version, including patches and minor versions.
If you use a bug tracking tool, its content can be seen as a piece of the DHF. Freezing or exporting the content of the bug tracking tool is a good way to keep track of the status of bugs for each released version or patch.<br /></p>
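<p>Freezing the tracker content can be as simple as serializing the issues targeting a released version into an immutable record. A sketch with a generic issue structure, which is an assumption for illustration, not any specific tracker's schema or API:</p>

```python
# Sketch: freeze the bug status list of a released version as a DHF record.
# The issue fields are a generic assumption, not a specific tracker's schema.

import json

def freeze_bug_status(issues: list, version: str) -> str:
    """Serialize the issues targeting a version, to archive in the DHF."""
    snapshot = {
        "version": version,
        "issues": sorted(
            (i for i in issues if i["fix_version"] == version),
            key=lambda i: i["id"],
        ),
    }
    return json.dumps(snapshot, indent=2)

issues = [
    {"id": 12, "status": "closed", "fix_version": "1.4"},
    {"id": 15, "status": "open", "fix_version": "1.5"},
]
record = freeze_bug_status(issues, "1.4")  # archive this string with the DHF
```

<p>The frozen record gives the bug status of the released version, independent of later changes in the live tracker.</p>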
<h4>To sum-up the DHF content</h4>
<p>The DHF contains the set of all documents and records related to software design, for every software version (major or minor) and for every patch.<br />
The procedures and plans followed during design and maintenance can also be seen as part of the DHF, since they describe the processes followed at the time the software was designed.<br />
The content of software development tools, mainly configuration management and bug tracking, can also be seen as an important piece of the DHF, since they hold the history of source files and the history of bugs.<br />
<br />
<a href="https://blog.cm-dm.com/post/2014/10/17/Content-of-DHF%2C-DMR-and-DHR-for-medical-device-software-Part-2-DMR">Next time</a> we'll see the Device Master Record and how to build it with Design Transfer.</p>
<h3>ISO/DIS 13485:2014 strengthens requirements about software - Part 2</h3>
<p><em>2014-06-27, by Mitch. Tags: Standards, FDA, ISO 13485, ISO 14971, software failure, Software Validation</em></p>
<p>Continuing with ISO/DIS 13485:2014, after an overview of software-related changes <a href="https://blog.cm-dm.com/post/2014/06/13/ISO/DIS-13485%3A2014-strengthens-requirements-about-software-Part-1">in the last article</a>, let's focus on the new clause 4.1.6.</p> <h4>New clause 4.1.6</h4>
<p>The clause says:<br />
<em>The organization shall document procedures for the validation of the application of computer software used in the quality management system, including production and service provision.</em><br />
<br />
That's brand new, and could require a lot of man-hours in companies where the QMS relies on computerized systems and produces lots of electronic documents and records.<br />
It's however not so new for companies which already implement the 21 CFR regulations (see below).<br />
<br />
The clause 4.1.6 continues with:<br />
<em>Such software applications shall be validated for their intended use prior to initial use, and after any changes to such software and/or its application. Records of such activities shall be maintained.</em><br />
<br />
OK, that's logical. If we want to validate software, we need to validate it according to established criteria (the topmost one being the intended use), we need to revalidate when something has changed, and we need to record the validation results to prove that it was done.<br />
<br />
The clause 4.1.6 ends with:<br />
<em>For each application of computer software used in the quality management system, the organization shall determine and justify the specific approach and the level of effort to be applied for software validation activities based on the risk associated with the use of the software</em><br />
<br />
Phew! We can choose our own approach and fine-tune the level of effort demanded by the validation! But it shall be done according to the results of a risk assessment.<br />
<br />
My two comments (my two cents):</p>
<h4>Comment 1: what kind of risk</h4>
<p>Validation is based on risk assessment, high risk = heavy validation, low risk = light validation.<br />
But what kind of risk assessment?<br />
In the definitions section, we find the definitions of risk and risk management, which both refer to ISO 14971. We can assume that the required risk management method is that of ISO 14971. However, there is no reference to ISO 14971 in clause 4.1.6, contrary to some other clauses dealing with risks elsewhere in the standard.<br />
And what kind of risks should be assessed? Probably those for which the root cause is a QMS software failure.<br />
Knowing by experience how people don't feel at ease with software-related risks, I bet this risk assessment is going to burn a lot of man-hours in quality departments!</p>
<h4>Comment 2: the least burdensome approach</h4>
<p>I like this expression: <em>least burdensome approach</em>, because this is exactly what everybody is going to do. Translated into pragmatic words: everybody is going to do as little as possible to get through the validation.<br />
This is the corollary of comment 1: if something is hard to achieve, I'd rather try not to do it.
For example:</p>
<ul>
<li>A manufacturer which uses simple Excel sheets (no formulas) to record CAPAs could argue that there is a very low risk of software failure, and as a consequence won't validate the sheets,</li>
<li>Another, which has used an Access database for 10 years without problems, could argue that there is no need to validate something with a long history of use,</li>
<li>A third one, which bought a license for a QMS management software package, could argue that the validation was covered by the supplier management process.</li>
</ul>
<p>These three examples could be adequate in some cases, and not adequate in others.</p>
<h4>And 21 CFR?</h4>
<p>To some extent, the new 4.1.6 clause is made to bring ISO 13485 more in line with the requirements of the US 21 CFR regulations. More precisely, 21 CFR 820.70(i):<br />
<em>When computers or automated data processing systems are used as part of production <strong>or the quality system</strong>, the manufacturer shall validate computer software for its intended use according to an established protocol.</em><br />
I put <em>or the quality system</em> in bold, because software used in production processes is already addressed by clause 7.5.2 of ISO 13485:2003. So software in quality systems, addressed by 21 CFR 820.70(i), is covered by the new clause 4.1.6.<br />
<br />
Therefore manufacturers, which already apply 21.CFR regulations, won't be surprised by clause 4.1.6.<br /></p>
<h4>Conclusion</h4>
<p>A lot of work is required to bring an existing and well-automated QMS in line with the new clause 4.1.6. But if it's done as part of a regulatory strategy, it's worth the effort.<br />
It makes the QMS more ready for the changes in regulations (for example, those expected in Europe) and more in line with 21 CFR requirements about software validation.</p>
<h3>Validation of compiler and IDE - Why, when and how to? - Part 1</h3>
<p><em>2014-03-14, by Mitch. Tags: Processes, critical software, development process, FDA, IEC 62304, risk management, Software Validation, Software Verification, SOUP</em></p>
<p>Validating the compiler used in software development is a recurring issue. To what extent should a compiler be validated, when, how and why?<br />
In the same vein, we can extend the question of validation to all tools used in the software development environment: integrated development environment, configuration management tools, compiler (and linker), automated test tools.</p> <p><br />
<em>Edit June 2016: this article remains relevant with the new requirements on software validation found in ISO 13485:2016.</em><br />
<br /></p>
<h4>Class of medical device</h4>
<p>If you're in FDA class III, or class III for CE marking, or Class C of IEC 62304, you have to do it thoroughly if a flaw in a development tool represents an unacceptable risk!<br />
<br />
If you're in FDA class II, or class IIa or IIb for CE marking, or class B of IEC 62304, you may do it, but it's far from mandatory!<br />
<br />
If you're in FDA class I, or Class I (even class IIa) for CE marking, or class A of IEC 62304, do it if you have spare time!<br />
<br />
In other words, thorough development tool validation, and especially compiler validation, is only relevant for very, very critical software.<br />
<br />
<em>Edit June 2016: following the risk-based approach found in ISO 13485:2016, this rationale remains relevant.</em>
<br />
Perhaps it makes sense for a small subset of embedded software used in class III medical devices, like pacemakers. Likewise, it makes sense in automotive or airborne systems, where a software failure can mean dozens of casualties.</p>
<h5>Development tools are low risk</h5>
<p>The main rationale for not validating development tools is to consider them low-risk software: if there is a bug in one of these tools, the built software will be buggy, and odds are pretty good that this bug will be discovered during software tests (be they unit, integration, or functional).<br /></p>
<h4>Examples</h4>
<p>Here are some examples of bugs in development tools:</p>
<ul>
<li>My IDE has a bug in the code editor and doesn't save source files under specific conditions. I'm going to see it quickly! Or I'm going to see it in a colleague's code during a code review.</li>
<li>My source control tool has a bug in the graphical merge function. I'm going to see it quickly as well!</li>
<li>Under certain circumstances, my compiler doesn't correctly cast a floating-point value to an integer value. I'm not going to see it quickly, but I'll probably catch it during tests, through inconsistent computed values.</li>
</ul>
<p>All in all, tests in the software development process are here to find problems created in early stages of the process. Most of these problems are created by humans (we can't think of everything), and some are created by the tools we use (the guys who created the tools couldn't think of everything).</p>
<h5>Process risk assessment</h5>
<p>What is shown above amounts to assessing the risks of the software development process.<br />
In class I, there is no point in validating development tools thoroughly, because the hazardous situations created by these tools have low severity (the final software is class A of IEC 62304) or low probability (bugs introduced by these tools are fixed during code reviews or tests).<br />
In class II or III, it's useful to validate these tools, because the hazardous situations they create (namely bugs in the built software) have high severity or high probability (think of the <a href="https://blog.cm-dm.com/post/2012/09/14/Probability-of-occurence-of-a-software-failure">100% probability in software hazards</a>).</p>
<h4>How to validate these tools</h4>
<p>If you have to validate these tools, you may take examples from this guidance: AAMI TIR36, Validation of software for regulated processes. It has pretty good examples (except for compilers, see below).<br />
You may also draw inspiration from GAMP 5 about computerized systems (pull out your credit card if you want to read it!).<br />
<br />
If you don't want to buy any of these documents, there are plenty of examples available on the internet. You just have to search for IQ/OQ/PQ plans and reports.<br />
Basically, a validation plan is a bit like applying the software development process of IEC 62304 to a product made of SOUP only:</p>
<ul>
<li>Assessing risks of the software development process,</li>
<li>Writing requirements of the ideal development tool (including requirements mitigating risks and requirements about the tool vendor),</li>
<li>No architecture or detailed design; instead, selecting the right tool for your needs (don't forget interoperability with other tools),</li>
<li>Tests with three levels:
<ul>
<li>Installation Qualification (IQ), i.e. ensuring that it is deployed and well configured on the development or integration platform, and verifying that all necessary documentation is available,</li>
<li>Operational Qualification (OQ), i.e. verifying that it works and integrates well with other development tools, according to written and approved requirements (including requirements mitigating risks),</li>
<li>Performance Qualification (PQ), i.e. using the development tools in real conditions for a period of time, to ensure that the tool and its vendor behave according to the expected performance.</li>
</ul></li>
</ul>
<p><br />
If you are in a case where there's no urgent need to validate software development tools, then just write a document with the rationale that led you to choose these tools.<br />
In all cases, however, it's necessary to have a maintenance plan for the development tools, similar to what you do for SOUPs in IEC 62304:</p>
<ul>
<li>Monitoring published bugs, bug fixes and new versions,</li>
<li>Assessing risks related to these bugs,</li>
<li>Deciding whether it's necessary to install a new version of the development tool.</li>
</ul>
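The decision step above can be sketched as a simple rule. The bug-record fields and severity scale below are hypothetical assumptions, for illustration only; the actual impact assessment belongs in your risk management records.

```python
from dataclasses import dataclass

# Hypothetical record of a bug published by the tool vendor; the fields
# and the severity scale are illustrative assumptions, not a standard format.
@dataclass
class PublishedBug:
    bug_id: str
    severity: str              # assumed scale: "low", "medium", "high"
    affects_used_features: bool
    fixed_in_version: str

def upgrade_recommended(bugs) -> bool:
    """Recommend installing a new tool version when at least one
    high-severity bug affects a feature the team actually uses."""
    return any(b.severity == "high" and b.affects_used_features for b in bugs)
```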
<h4>Development vs production processes</h4>
<p>There are dozens of articles or memos or documents about validation of tools used in production processes.<br />
The validation method described above is a bit peculiar because it deals with tools used in software development processes (for software design), not production processes (for production of physical goods or for delivery of standardized services).<br />
Thus it is acceptable only for development processes. For production processes, like automated machines or test benches, this validation plan is too simple.<br />
<br />
However, if you want to validate a compiler, this validation plan is a bit incomplete.<br />
Validating a compiler once and for all is a titanic task! We'll see it <a href="https://blog.cm-dm.com/post/2014/03/28/validation-of-compiler-and-IDE-Why%2C-when-and-how-to-Part-2">in the next article</a>.</p>Want to post a medical App on Apple Store? Things get harder! (by Mitch, 2013-09-27)<p><q>Apple now asking app developers to provide sources of medical information</q> says the <a href="http://www.imedicalapps.com/2013/09/apple-app-developers-sources-medical-information/">imedicalapps.com web site</a>.</p> <p>It looks like Apple wants to put pressure on medical app developers. According to this article, the review is extended to a kind of validation of the apps: Apple now wants to validate the sources of the medical information contained in the app.<br />
<br />
This new validation process somewhat overlaps the validation requested by the FDA (21 CFR 820.30) or by the CE marking process (Essential Requirement #6bis) or any other national regulation.<br />
Despite Apple's legitimate desire to weed out ineffective apps, it raises more questions than it solves!</p>How to bring legacy software into line with IEC 62304? - part 3 (by Mitch, 2013-03-08)<p>We've seen in the <a href="https://blog.cm-dm.com/post/2013/02/06/How-to-bring-legacy-software-into-line-with-IEC-62304-part1">two previous posts</a> several solutions for treating legacy software according to IEC 62304.<br />
But there is nothing equivalent to this discussion in IEC 62304 itself. The standard is silent about these situations.</p> <h4>IEC 62304 2nd Edition</h4>
<p>The next edition of IEC 62304 is supposed to have a new section about legacy software. Unfortunately, this new revision is still in draft and should be available only by 2014.<br />
If we want to find further information about legacy software, we have to look at standards and guidances outside the field of medical devices.<br />
There aren't many possibilities; the most obvious is to see how things are supposed to happen according to the good manufacturing practices of the pharmaceutical industry (whose production infrastructure contains software).<br /></p>
<h4>GAMP about Software</h4>
<p>The <a href="http://en.wikipedia.org/wiki/Good_Automated_Manufacturing_Practice">ISPE GAMP</a> defines the good manufacturing practices of the pharmaceutical industry. GAMP is a set of guidances, some of them about IT systems and laboratory computerized systems.<br />
Like FDA guidances, the ISPE GAMP has sections that focus on validation of software.<br />
But this was not enough, and the ISPE published a <strong>Good Practice Guide: The Validation of Legacy Systems</strong> in Pharmaceutical Engineering, Vol. 23, No. 6, November/December 2003. This document is sold by the ISPE, but it's worth it.<br />
In a few words, it contains the good practices for validation of legacy systems, from the simplest case (reuse) to the most difficult case, where software has to be reverse-engineered to align its design with the good practices.<br /></p>
<h4>Retrospective Validation</h4>
<p>Another source of information is the concept of Retrospective Validation defined in this <a href="http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/PostmarketRequirements/QualitySystemsRegulations/MedicalDeviceQualitySystemsManual/ucm122439.htm">FDA document</a>.<br />
Retrospective validation applies to any legacy process. It is interesting to have a look at how the FDA expects things to be. It focuses on the availability of historical data and on their characteristics.<br />
IMHO, having legacy software data like those requested by the FDA is a good start. In case legacy software can be treated as a SOUP, the risk analysis is easier with such data.<br /></p>
<h4>Further information</h4>
<p>Besides the two sources of information I quoted above, the cupboard is getting bare. There is not much about legacy software in official guidances. There is possibly more to eat outside the medical industry, as in airborne or automotive systems, but that goes beyond the scope of this blog.<br />
If you have more information, I would be glad if you shared it!<br /></p>En route to Software Verification: one goal, many methods - part 3 (by Mitch, 2012-12-14)<p>In <a href="https://blog.cm-dm.com/post/2012/12/07/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-2">my last post</a>, I explained the benefits of static analysis. This software verification method is mainly relevant for finding bugs in mission-critical software, but it serves the need for bug-free software in less critical software as well.<br />
Static analysis can be seen as an achievement in the implementation of software verification methods. Yet other methods exist that fit very specific purposes.</p> <h4>Heavy Metal</h4>
<p>In reference to my previous posts, I place the test methods here in the <em>Heavy Metal</em> category!<br />
Perhaps some may say these methods aren't so heavy metal. It depends on your experience with these methods, of course.</p>
<h5>Automated GUI testing</h5>
<p>Automated Graphical User Interface testing sounds like something very simple, but it is not. It requires a GUI testing engine and a lot of patience from the people who feed the engine with tests.<br />
The main impact of GUI testing is on the composition of the test team. The GUI tests will certainly be assigned to a (poor) guy, who will work 100% on this task. A good option is to let him/her do other types of testing and integration too!<br />
I personally haven't yet been satisfied by any GUI testing tool I've found, either open-source or commercial. If you can point me to one that you're happy with, you're welcome!</p>
<h5>Performance testing</h5>
<p>Testing performance is less specific than testing the GUI because it involves the whole architecture of a product, not only its GUI. As a consequence, it involves the whole testing team (this is my experience, maybe not yours).<br />
Here again, a lot of tools exist to do performance tests. Most of them are focused on web apps or databases, with concerns about load balancing and load increase.<br />
This is probably not relevant for embedded software. Tailor-made testing programs are better suited to testing a specific issue in such software.</p>
<h5>Security testing</h5>
<p>Static analysis tools can find security holes in your code, like buffer overruns. Some other security tests may be necessary for your devices. It may be a good idea to ask IT security consultants to find security holes in your devices.<br />
Embedded software with a wireless connection (even one used very occasionally, such as only for maintenance) falls into the scope of this kind of test.</p>
<h5>Statistical methods</h5>
<p>Statistical methods are made for testing complex algorithms. It is not possible to test all combinations of input values of a complex algorithm. One way to increase the level of confidence in an algorithm is to use such methods.<br />
Most algorithms are based on laws of physics or mathematics and can be tested to ensure that the algorithm respects the law.<br />
Statistical tests call for techniques like Monte Carlo simulations or chi-squared tests, to name a few. Although I don't have a background in statistics, I once had to use a Monte Carlo simulation engine. It was a commercial add-on to Excel, which was really nice to use.</p>
<h4>The Big Picture</h4>
<p>To finish this series of posts, here is a diagram, which contains the position of the different verification methods we've seen. I tried to place them according to their complexity and the type of bugs found.<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-08_m.jpg" alt="Software Medical Devices - Position of different verification methods vs their complexity and the scope of bugs found" style="display:block; margin:0 auto;" title="Software Medical Devices - Position of different verification methods vs their complexity and the scope of bugs found, Dec 2012" /><br />
You may certainly change the size and the position of the ellipses, given your experience and the kind of medical devices you work on. And I don't include clinical trials in end-user tests done for software verification, as discussed in <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation">this article</a>.<br />
<br />
The most important point on this diagram is the projection of the ellipses on the x-axis. The union of the projections of all ellipses shall cover all the x-axis. i.e. all types of bugs shall be sought by these methods:</p>
<ul>
<li>High-level: use-cases and architecture,</li>
<li>Mid-level: algorithms and components,</li>
<li>Low-level: language pitfalls, coding rules and software units.</li>
</ul>
<p>We have a zone at both extremities of the axis (grey-shaded) where only one method is able to find bugs efficiently:</p>
<ul>
<li>For very high-level bugs, only end-users can find them,</li>
<li>For very low-level bugs, only static analysis can find them.</li>
</ul>
<p>When these kinds of bugs are found in the field, after software validation:</p>
<ul>
<li>Very high-level bugs are discrepancies between the result given by the software and the result expected by the user, which require medical knowledge to be found and analyzed,</li>
<li>Very low-level bugs are any kind of error in code that can lead to erratic, non-reproducible behavior or simply a crash.</li>
</ul>
<p>Only a fully controlled software development process is able to wipe out all bugs, including those at the extremities of the diagram.</p>
<h4>He forgot code reviews!</h4>
<p>Some may say that I didn't mention code reviews by peers or code inspections (like Fagan inspections, as mentioned in a comment on <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification">this post</a>). This is a kind of verification method, but no <em>live</em> software is involved:</p>
<ul>
<li>neither the software is running,</li>
<li>nor a test tool is running,</li>
<li>only developers can do it.</li>
</ul>
<p>People only need a sheet of paper or a text editor to do code reviews. Code reviews are made by developers, not testers. That's why I didn't put them in the scope of this series.<br />
One could argue that unit tests are made by developers. Yes, it's true. But unit tests may be run by a tester or a build manager, without the help of any developer.<br />
That's why I don't include code reviews in software verification, nor requirements reviews or architecture reviews, for example.</p>
<h4>Conclusion</h4>
<p>I tried to give an overview of the software verification methods that exist and how complementary they are. There are probably other methods used by software companies or computer science labs, but I think I have covered 90%-95% of them. If you know some others, feel free to quote them in the comments!<br /></p>En route to Software Verification: one goal, many methods - part 2 (by Mitch, 2012-12-07, updated 2012-12-17)<p>In my <a href="https://blog.cm-dm.com/post/2012/11/30/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-1">last article</a>, I talked about the most classical methods used to verify software: human testing (driven by test cases or not) and unit tests. I was about to talk about static analysis, which I place at a higher level of complexity in the list of verification methods, but I have to say a bit more about unit tests.</p> <h4>Discussion about unit tests vs verification</h4>
<p>The main characteristic of unit tests is that they change the way developers work:</p>
<ul>
<li>Good unit tests are written before coding,</li>
<li>Less good unit tests are written after coding.</li>
</ul>
<p>I can't write that unit tests written after coding are bad. It's always good to write unit tests. They are just less good if written a posteriori or ... during a phase of reverse-engineering (yes, it happens. Don't blame teams who work like that. You never know...).<br />
However, there is one case where unit tests shall be written a posteriori: when a bug is found and fixed. The unit test is written along with the bug fix to maximize the odds of avoiding regressions in future versions.<br /></p>
<h5>Unit tests = coding</h5>
<p>A good implementation of unit tests requires changing the way developers design and code. First you think about the function, then you write the test, and finally you write the code.<br />
So it is not so obvious to do unit tests the canonical way. Developers need training to be efficient at writing unit tests, and, most of all, they must be willing to do so.<br />
A new agile development method was even created around this idea: <a href="http://en.wikipedia.org/wiki/Test-driven_development">Test-Driven Development</a>. It makes intensive use of very early unit tests combined with agile development in short loops.</p>
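To illustrate the test-first loop, here is a hedged sketch in Python using the standard unittest module. The dose_per_kg function is a made-up example, not a real clinical formula; the point is only the order of activities (think, test, then code).

```python
import unittest

# Step 1: think about the function's contract.
# Step 2: write the tests, before any implementation exists.
class TestDosePerKg(unittest.TestCase):
    def test_nominal_dose(self):
        self.assertAlmostEqual(dose_per_kg(total_mg=140.0, weight_kg=70.0), 2.0)

    def test_rejects_non_positive_weight(self):
        with self.assertRaises(ValueError):
            dose_per_kg(total_mg=140.0, weight_kg=0.0)

# Step 3: only now write the code that makes the tests pass.
def dose_per_kg(total_mg: float, weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return total_mg / weight_kg

if __name__ == "__main__":
    unittest.main(argv=["tdd-example"], exit=False)
```

Note that the second test forced the range check into the implementation: the test existed first, so the code had to handle the invalid input from day one.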
<h5>Unit tests = verifying</h5>
<p>Unit tests are a tool to verify that software runs the way it was designed. So they are definitely a part of methods used during verification.<br />
But they happen very early in the development process, before the "true" verification phase. If they are not well implemented during coding phases, it's extremely difficult, time-consuming and expensive to write them during the verification phase.<br />
Classical software test cases may be modified or completed during the verification phase if they were not prepared well enough. But this is hardly possible with unit tests, because during verification developers spend all their time fixing bugs.<br />
They don't have time to add more unit tests on components where bugs don't show up. It's too late.</p>
<h5>Unit tests rock</h5>
<p>Unit tests are a very powerful tool. But writing unit tests and coding are intricately linked (should I write intertwined?) activities. As a consequence, they require some changes in the habits of the software development team.<br />
That's why unit tests are not jazzy but definitely rock.</p>
<h4>Rock</h4>
<p>Let's continue with static analysis, another very powerful tool. Contrary to unit tests, there's little to do from the point of view of the developer to run static analysis. Most tools are run at build time and the main effort is to interpret the report generated by the tool.</p>
<h5>Checking the code</h5>
<p>Static analysis can be seen as code checks and unit tests that other developers have thought about and implemented for you. The main advantage of static analysis is its ability to scan all the code to find issues and report them.<br />
It's the best way to wipe out the most basic bugs found in the C language, like uninitialized memory, null pointers and so on. Open-source static analysis engines can do these types of basic checks:</p>
<ul>
<li>Basic errors like those mentioned above,</li>
<li>Programming rules,</li>
<li>and also code quality metrics (dead code, length of methods ...).</li>
</ul>
<p>I say that these checks are basic, but the engines that perform them are absolutely not basic code! Since they are all based on syntactic and semantic analysis, writing one is as complex as writing a compiler or an interpreter. That's why some people consider that these checks should be built into compilers.<br />
Some compilers actually do it, because the language specs already contain rules that make these checks mandatory, as in Ada, C# or MISRA C.<br />
So the frontier between an advanced compiler and a static analysis engine is sometimes blurred.</p>
<h5>Advanced code inspection</h5>
<p>The most advanced static analysis engines can go far beyond these basic checks. For example finding conditions when:</p>
<ul>
<li>a division by zero or arithmetic overflow occurs,</li>
<li>a loop doesn't exit,</li>
<li>a database deadlock happens.</li>
</ul>
<p>There are as many possibilities as there are programs.<br />
With such tools, the advantage is to cover cases you haven't thought of. The drawback is that some lazy people might rely on the tool to find bugs.<br />
Another drawback of such tools is that you can't rely on just one of them. Each tool has its own checks that the others don't. A lot of checks overlap, but there are always grey areas.<br />
It's definitely true that using one is going to decrease the number of hidden bugs in code. Theoretically, it would be even better to use more than one, but that is probably not practically feasible.<br /></p>
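To make the principle concrete, here is a toy static check written with Python's standard ast module: it flags divisions by a literal zero without ever running the inspected code. Real engines go vastly further (data-flow analysis, abstract interpretation), so treat this as a sketch of the idea only.

```python
import ast

def find_zero_divisions(source: str) -> list:
    """Return the line numbers where the source divides by a literal zero.
    Purely static: the source is parsed, never executed."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.BinOp)
                and isinstance(node.op, (ast.Div, ast.FloorDiv, ast.Mod))
                and isinstance(node.right, ast.Constant)
                and node.right.value == 0):
            findings.append(node.lineno)
    return findings

sample = """x = 10
y = x / 0
z = x / 2
"""
print(find_zero_divisions(sample))  # prints [2]
```

A real tool would also track values across assignments (x = 0; y = 1 / x), which this sketch cannot see; that is exactly the gap between the basic checks and the advanced engines discussed above.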
<h5>Which tools</h5>
<p>A lot of free and open-source tools exist, with more or less efficiency. They all do basic checks and more or less advanced ones. Here is a partial list of tools:</p>
<ul>
<li>For C/C++: <a href="http://www.dwheeler.com/flawfinder/">FlawFinder</a>, <a href="http://en.wikipedia.org/wiki/Cppcheck">CppCheck</a>, <a href="http://www.security-database.com/toolswatch/RATS-v2-3-Rough-Auditing-Tool-for.html">RATS</a>,</li>
<li>For Java: <a href="http://checkstyle.sourceforge.net">CheckStyle</a>, <a href="http://www.sonarsource.org">Sonar</a>, <a href="http://findbugs.sourceforge.net">FindBugs</a>.</li>
</ul>
<p>All commercial tools available on the market do both types of checks: basic ones and (more or less) advanced ones. I usually don't quote commercial software in my blog, but I can make an exception for <a href="http://en.wikipedia.org/wiki/Polyspace">Polyspace</a>. It was created after the crash of the European Ariane 5 rocket. The bug in the onboard computer responsible for the crash was known as "impossible to find with automated tests". Now it can be found, thanks to Polyspace!</p>
<h5>Static analysis and regulatory requirements</h5>
<p>It is definitely a good idea to implement static analysis for devices with a mid to high level of risk, namely class C according to IEC 62304, and optionally for class B software. This is my opinion; there's no requirement in the standard that tells you to do static analysis.<br />
There is no regulatory requirement that makes static analysis mandatory for mission-critical software either, but it's better to have it in your software development process. As far as I know, no FDA guidance quotes static analysis (except a small line in the guidance about <a href="http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm089543.htm">software in a 510k</a>).<br />
Researchers at the FDA <a href="http://www.embedded.com/design/prototyping-and-development/4007539/Using-static-analysis-to-evaluate-software-in-medical-devices">published an article</a> a few years ago about the benefits of static analysis. This shows the FDA's interest in static analysis for the specific cases quoted in that article.<br />
On the CE marking side, I found no data about static analysis. So use this method based on your own assessment of your situation!<br />
<br />
<br />
I could say a lot more about these methods, like problems with false positives and false negatives, or how to interpret static analysis logs. There are dozens of fantastic articles on the web. Static analysis is a vivid and exciting subject (when I say exciting, it's true for you if you have kept the little geek inside you alive!).
<br />
<br />
There are even more complex software verification methods. I'll talk about those <a href="https://blog.cm-dm.com/post/2012/12/13/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-3">in the last article of this series</a> en route to software verification!</p>En route to Software Verification: one goal, many methods - part 1 (by Mitch, 2012-11-30, updated 2012-12-12)<p>Software verification is easy to define: demonstrating that software works as it was specified (and without bugs!). But there is no unique way to do it.<br />
Let's see what methods we have at hand to verify software.</p> <h4>Classic</h4>
<p>Ask someone how to verify software and he/she will tell you to put a guy in front of the guilty machine with a dozen testing procedures.<br />
This is actually the most basic way to test software. Doing this, you are sure that:</p>
<ul>
<li>not all cases will be covered (even with hundreds of testing procedures),</li>
<li>the time initially reserved for tests in the schedule won't be enough,</li>
<li>the final software will be full of bugs.</li>
</ul>
<p>Yet most critical bugs should have been wiped out, though some bugs could still be hidden somewhere. That's why it's necessary to use other methods to test software.<br />
<br />
The second most basic way to test software is to give it to a few selected end-users, after the first phase of tests described above.<br />
Doing this, you are sure that:</p>
<ul>
<li>not all cases will be covered (but fewer missed than before),</li>
<li>the time reserved for tests by the end-user won't be long (he/she is probably a physician and has lots of other things to do),</li>
<li>the final software will be full of bugs (but less than before).</li>
</ul>
<p>The second bullet point is false if you pay selected end-users to do tests, or if you're lucky enough to have a passionate end-user.<br />
Nevertheless there will still be remaining bugs in software, most probably:</p>
<ul>
<li>all critical bugs are wiped out,</li>
<li>some major bugs could still reside somewhere,</li>
<li>there are dozens of minor bugs.</li>
</ul>
<p>But it works so far. And you don't have enough time to fix anything else, so the device is placed on the market as is.<br />
This is true for devices with a low level of risk, namely class A according to IEC 62304 or possibly low classes according to national regulations.<br />
Testing all possible cases is not feasible within the timeframe and budget of most software development projects. There are always remaining bugs, found when the device is already on the market. This is not a big deal as long as the remaining bugs don't impair the risk-benefit assessment of the device.</p>
<h4>Jazz</h4>
<p>Most bugs are triggered by small consistency errors in code: for example, not testing whether an input parameter is inside a given range of values before starting computations. <a href="http://en.wikipedia.org/wiki/Unit_testing">Unit testing</a> is a method to ensure that these tiny inconsistencies are detected and fixed.<br />
When they became popular a few years ago, unit tests seemed to be a miraculous method to kill bugs in the egg (I personally was an aficionado of unit tests!).
But this method is not so miraculous and has its own pitfalls.<br />
By doing this, you are sure that:</p>
<ul>
<li>the time reserved for coding unit tests doesn't fit into the schedule,</li>
<li>some critical bugs and major bugs are wiped out,</li>
<li>not all cases of inconsistency are covered (but fewer missed than without unit tests).</li>
</ul>
<p>The problem with unit tests is that the software developer is at the center of the process:</p>
<ul>
<li>he/she has to decide which tests to write,</li>
<li>he/she could write an erroneous unit test (do we need a unit test to test the unit test???),</li>
<li>he/she may not have the time to write them at all.</li>
</ul>
<p>So unit tests bring a higher level of confidence that the final software has fewer hidden bugs.<br />
Compared to classic methods, they wipe out small inconsistencies that could lead to critical bugs through a chain of events in the algorithms running in the software.<br />
They're really complementary to the classic methods. Classic methods are better at catching bugs in use cases or bad behaviors relative to high-level requirements. Unit tests are better at catching bugs in workflows or algorithms at a lower level.<br />
It is definitely a good idea to implement unit tests for devices with a mid level of risk, namely class B according to IEC 62304!
<br /></p>
<h5>Note</h5>
<p>Some languages have contract checks built into the language specification, like the pioneering <a href="http://en.wikipedia.org/wiki/Eiffel_(language)">Eiffel</a> language. Eiffel supports design by contract natively: basically, it requires that each input parameter and each output parameter is tested against a rule implemented in the code of the procedure/method. But Eiffel has always remained a niche language, and design by contract has only really taken off with recent versions of languages like C# (with .NET Code Contracts) and Ada 2012.
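Design by contract can be approximated in most languages. Here is a hedged Python sketch using a decorator to check a precondition and a postcondition; it is far simpler than Eiffel's built-in mechanism and is shown only to illustrate the idea.

```python
import functools

def contract(pre=None, post=None):
    """Toy design-by-contract decorator: 'pre' is checked on the input
    parameters, 'post' on the returned value. A sketch only -- Eiffel
    builds this kind of check into the language itself."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise AssertionError(f"precondition violated in {func.__name__}")
            result = func(*args, **kwargs)
            if post is not None and not post(result):
                raise AssertionError(f"postcondition violated in {func.__name__}")
            return result
        return wrapper
    return decorator

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def square_root(x: float) -> float:
    return x ** 0.5
```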
<br />
<br />
<br />
Going higher in complexity, we'll look next time at static analysis. Another method, which deserves its <a href="https://blog.cm-dm.com/post/2012/12/07/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-2">own article</a>!</p>New GHTF essential principles: software validation added! (by Mitch, 2012-11-23, updated 2012-11-26)<p>The Global Harmonization Task Force released an update of its guidance on <a href="http://www.imdrf.org/docs/ghtf/final/sg1/technical-docs/ghtf-sg1-n68-2012-safety-performance-medical-devices-121102.pdf">Essential Principles of Safety and Performance of Medical Devices</a>. It supersedes the previous version, released in 2005.</p> <p>To continue the discussion about <a href="https://blog.cm-dm.com/post/2012/11/16/VV%3A-verification-validation%2C-where-s-the-truth">software validation in my three last posts</a>, this is new evidence of the rising importance of software validation.<br /></p>
<h4>New section added about software</h4>
<p>A quick diff of both versions of the guidance shows that there is news about software: they've created a new section, <em>Medical devices that incorporate software and standalone medical device software</em>.<br />
This section contains two requirements.</p>
<h5>PEMS left unchanged</h5>
<p>The requirement about Programmable Electronic Medical Systems (PEMS) is left unchanged, except for a few words about standalone software.</p>
<blockquote><p>Devices incorporating electronic programmable systems, including software, <strong>or standalone software that are devices in themselves,</strong> should be designed to ensure repeatability, reliability and performance according to the intended use. In the event of a single fault condition, appropriate means should be adopted to eliminate or reduce as far as reasonably practicable and appropriate consequent risks.</p>
<p></p></blockquote>
<h5>New requirement about software validation</h5>
<p>They added a new requirement about software validation, which is in many ways similar to the one found in the proposal for the new EC directive.</p>
<blockquote><p>For devices which incorporate software or for standalone software that are devices in themselves, the software must be validated according to the state of the art taking into account the principles of development lifecycle, risk management, verification and validation.</p>
<p></p></blockquote>
<p><br />
IMHO, It's clear that software validation is becoming a hot subject!</p>https://blog.cm-dm.com/post/2012/11/23/New-GHTF-essential-principles%3A-software-validation-added%21#comment-formhttps://blog.cm-dm.com/feed/atom/comments/72V&V: verification & validation, doing it right.urn:md5:48e162ea5cac37fb906660986f5187862012-11-16T12:34:00+01:002012-11-30T12:08:57+01:00MitchProcessesdevelopment processSoftware ValidationSoftware Verification<p>Writing about V&V in <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification">two previous posts</a>, I had a lot of comments from people on a well-known social network. They made corrections to my view of V&V and brought their own definitions.<br />
Here is an excerpt of their comments.</p> <h4>Doing the right product and the product right</h4>
<p>Someone gave these two definitions:</p>
<ul>
<li>Validation is doing the right product.</li>
<li>Verification is doing the product right.</li>
</ul>
<p>I like these two definitions: they are concise and make good mnemonics.<br />
<strong>If there is one thing you should remember from this post, it's these two definitions!</strong></p>
<h4>Validation is validating requirements</h4>
<p>Yes, to do the right product, it's necessary to validate the requirements.<br />
But not only the requirements, it's also necessary to validate that the final product matches the initial concept that was described in those requirements.</p>
<h4>Verification and validation aren't sequential</h4>
<p>Yes, verification and validation aren't sequential. Validation begins before verification. Even before coding. The first step of validation is validating requirements to ensure that the product is well defined.<br />
But verification ends before validation. I can't validate software that hasn't been verified from A to Z, before. How could I validate software with functions that haven't been tested?<br />
To reconcile everyone, I should have written in my last two posts that the end of verification happens before the end of validation. I edited my last two posts about V&V accordingly.<br /></p>
<h4>Validation is broader than verification</h4>
<p>Yes, it's true: validating a device goes beyond the scope of software (except for standalone software devices). That's why some people talk about software validation and device validation as two separate concepts.<br />
But for software taken alone, the scope of software verification and software validation is:</p>
<ul>
<li>Software, and</li>
<li>Its documentation.</li>
</ul>
<p>Here's my rationale:<br />
Every input data:</p>
<ul>
<li>Intended use,</li>
<li>Risk assessment,</li>
<li>Regulatory requirements,</li>
<li>Usability requirements and,</li>
<li>Last but not least, user requirements, and so on ...</li>
</ul>
<p>Can be translated into more detailed requirements:</p>
<ul>
<li>Use case scenarios,</li>
<li>Functional requirements and non-functional requirements, and</li>
<li>Documentation/labelling requirements.</li>
</ul>
<p>These more detailed requirements can be translated into:</p>
<ul>
<li>Architecture,</li>
<li>Interfaces and,</li>
<li>More detailed software requirements, and</li>
<li>Software units.</li>
</ul>
<p>Which are translated into:</p>
<ul>
<li>Software code,</li>
<li>Configuration data,</li>
<li>User documentation and administrator/maintenance documentation.</li>
</ul>
<p>All of these artifacts are tightly bound by traceability.<br />
So, when I verify software and its documentation, my software verification has the same scope as my software validation. And I ensure this is true through traceability from top-level requirements to most refined requirements, software units, software code and their tests.</p>
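This traceability can even be checked mechanically. Here is a minimal sketch in Python, with invented requirement and test-case identifiers, that reports any requirement no test case traces to:

```python
# Invented identifiers for the example: three requirements and two test cases.
requirements = {"SRS-001", "SRS-002", "SRS-003"}
trace_matrix = {
    "TC-01": {"SRS-001"},
    "TC-02": {"SRS-002", "SRS-003"},
}

def uncovered(requirements, trace_matrix):
    """Return the requirements that no test case traces to."""
    covered = set().union(*trace_matrix.values()) if trace_matrix else set()
    return requirements - covered

print(sorted(uncovered(requirements, trace_matrix)))  # prints [] : full coverage
```

The same check applies at every level of refinement: top-level requirements to detailed requirements, detailed requirements to units and tests.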
<h4>Why dissociating device V&V and software V&V?</h4>
<p>In all of this discussion, I made the assumption that device V&amp;V and software V&amp;V can be differentiated. One could argue that it's not relevant to make any distinction: in the end, it's the device that everybody wants to validate.<br />
I think that device V&amp;V and software V&amp;V should be differentiated for technical reasons when software is prominent in a device, e.g. when half or more of the requirements are addressed by software:</p>
<ol>
<li>Though everybody wants to minimize it, software is a source of complexity; yet users tend to think that an engineer can add new functions with a few mouse clicks,</li>
<li>The software development process has its own pace. It's possible to have prototypes very quickly (also true with hardware and fast prototyping) but it takes a lot of time and rework to make a usable product,</li>
<li>Using simulators, it's not necessary to have the final hardware ready to verify and validate software,</li>
<li>Validating software includes validating its graphical user interface, which can be a long process involving end-users if the GUI is complex,</li>
<li>Testing software takes a long time and it's difficult to anticipate all software failures.</li>
</ol>
<p>I could find tons of arguments to show that validating software can be separated from validating a device.<br />
The last argument I could use: regulations ask me to do so. The CE Mark directive demands software validation (Annex I, 12.1a of the current directive, and even more in the future directive to be released in 2014). 21 CFR 820.30(g) requires the same in the US, and the guidance on General Principles of Software Validation that the FDA released 15 years ago is still in force.<br /></p>
<h4>So, where is the truth?</h4>
<p>I haven't seen definitions of software verification and software validation that are accurate. Every company or consultant has its own recipe (I'm being provocative). They all work, so far, as users are happy with most of the devices placed on the market.<br />
Seeking a common definition of the terms that we use every day, like software verification and software validation, would be a good way to:</p>
<ol>
<li>Describe best practices,</li>
<li>Have people apply these practices.</li>
</ol>
<p>Such a job goes beyond what I can do in this blog! It could be a subject of update of the IEC 62304 standard. Today it stops at the end of software verification. Perhaps it could add definition and requirements for software validation.</p>https://blog.cm-dm.com/post/2012/11/16/VV%3A-verification-validation%2C-where-s-the-truth#comment-formhttps://blog.cm-dm.com/feed/atom/comments/71What is software validation?urn:md5:0088832b3f7e9ca2d2697960ac6a4c272012-11-02T14:11:00+01:002012-11-13T19:44:02+01:00MitchProcessesdevelopment processIEC 62304Software ValidationSoftware Verification<p>Following <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification">the article about software verification</a>, let's see what software validation is.</p> <h3>Validation after verification</h3>
<p>If you've read the other article, it's no news to you that the end of validation happens after the end of verification.<br />
In fact, validating a device is ensuring that it conforms to defined user needs and intended uses. In the light of this definition, verification is a part of the whole process of validation.<br />
Before ensuring that it conforms to user needs, the functions of the device have to be:</p>
<ol>
<li>described with software requirements and architecture,</li>
<li>implemented with code,</li>
<li>and tested.<br /></li>
</ol>
<p>So the requirements and the architecture have to be validated before they're implemented. That's why there are reviews of requirements, general design and detailed design before verification. These reviews are part of the validation process.<br />
When requirements and architecture are validated, the software is implemented and tested through verification.<br />
However, verification tests as required by the IEC 62304 standard are not enough. Specific tests, e.g. clinical tests or end-user tests in real conditions, can be done to validate the device. So yes, some activities of validation happen after verification.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-05_m.jpg" alt="software in medical devices - Validation includes verification activities plus additional tests" style="display:block; margin:0 auto;" title="software in medical devices - Validation includes verification activities plus additional tests, oct. 2012" />
<br /></p>
<h4>Goals of validation</h4>
<p>The goals of validation are not purely technical, compared to those of verification. Validation shall answer these questions:</p>
<ul>
<li>Does software conform to its intended use?</li>
<li>Is clinical use effective and efficient?</li>
<li>Are risks mitigated?</li>
<li>Is the risk / benefit ratio favorable?</li>
<li>Are the requirements enforced by national regulations met?</li>
</ul>
<p>Knowing these goals, it's easy to see that purely technical tests done on a test platform are not enough.</p>
<h4>Who does Validation tests</h4>
<p>Since validation tests are not purely technical tests, it's the role of physicians and other people with clinical knowledge to do them. For this reason, validation tests are done in a real environment, i.e. in healthcare centers.<br />
Software teams don't do validation tests, but they support them. If there is a bug or a problem with software during validation, it's the role of software developers to find out what's wrong. And fix it!<br /></p>
<h3>Activities done for validation</h3>
<p>Let's see what activities are required after the verification to complete the validation.</p>
<h4>Clinical tests</h4>
<p>The most obvious type of activity is clinical tests. From the point of view of IEC 62304, supplying software for clinical tests is equivalent to delivering software to end-users. Thus clinical tests are the beginning of software maintenance regarding IEC 62304.<br />
Practically, the technical conditions in which software is used in clinical tests can be very close to those of verification. Especially for standalone software: an end-user with clinical knowledge tests software on a PC.<br />
The first main difference is the testing protocol, which, for clinical tests, is highly formalized. The second main difference is the use of software with real patients. This is not 100% true, though, with standalone software: standalone software can be tested with real data sets that were archived even before the software was designed.</p>
<h4>No clinical tests</h4>
<p>Depending on the regulations in the countries where software will be sold, clinical tests may not be necessary. Usually, software which already has an equivalent on the market can be validated with existing clinical data.<br />
In this case there are no clinical tests. Software testing either stops after the end of verification, or finishes with a last phase of free tests in simulated clinical conditions.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-06_m.jpg" alt="software in medical devices - Tests in validation: Clinical Tests or Free tests in simulated clinical conditions" style="display:block; margin:0 auto;" title="software in medical devices - Tests in validation: Clinical Tests or Free tests in simulated clinical conditions, oct. 2012" /></p>
<h4>Software validation and Device validation</h4>
<p>When software is embedded in a device, there may be two types of validation:</p>
<ul>
<li>software validation: validation of software only,</li>
<li>device validation: validation of the device with software inside.</li>
</ul>
<p>So, there may be only one validation of the whole device, or two validations: one for the software and one for the device. It depends heavily on the type and complexity of the software embedded in the device.<br />
If the functions ensured by software are very complex and/or critical, a separate validation of software may be deemed necessary. Separating software validation is a way to reduce the complexity of the validation of the whole device.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-07_m.jpg" alt="software in medical devices - software validation and system/device validation" style="display:block; margin:0 auto;" title="software in medical devices - software validation and system/device validation, oct. 2012" /></p>
<h3>Validation review</h3>
<p>The validation ends with a validation review with the people who participated in the design, the verification tests, and the validation tests. An "independent reviewer" (someone who didn't participate in design and validation tasks) may also be required by some national regulations. The purpose of the validation review is to ensure that all the goals enumerated above are met:</p>
<ul>
<li>software conforms to its intended use,</li>
<li>clinical use is effective and efficient,</li>
<li>risks are mitigated,</li>
<li>the risk / benefit ratio is favorable,</li>
<li>the requirements enforced by national regulations are met.</li>
</ul>
<p>For embedded software, there may be two validation reviews, if software is validated separately before the whole device.<br />
<br />
The validation review marks the end of the whole validation process. People usually use the word "validation" with both meanings: the big validation process or the validation review. The validation review is the successful end of efforts stretched over months or years. Thus some people think of validation with the meaning of validation review.<br />
<br />
<br />
<br />
When software is validated, the phase of maintenance begins. But this is another story I'll tell in another article!</p>https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation#comment-formhttps://blog.cm-dm.com/feed/atom/comments/63What is software verification?urn:md5:1e75b060409afc3e60b8cff084082ceb2012-10-26T14:09:00+02:002012-11-13T19:29:42+01:00MitchProcessesAgiledevelopment processIEC 62304Software ValidationSoftware Verification<p>Many people confuse verification and validation. There is no exception for software! I'd even say that the confusion is even worse for standalone software.<br />
<br />
Let's see first the definition of verification and validation. I borrowed these definitions from the FDA website:</p>
<ul>
<li>Verification is confirming that design output meets the design input requirements,</li>
<li>Validation is ensuring that the device conforms to defined user needs and intended uses.</li>
</ul>
<p>OK, this remains theoretical. How do we do that with medical device software?<br />
In this article I focus on verification and will focus on validation in the next article: <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation">What is software validation</a>.</p> <h3>Verification before validation</h3>
<p>This is very basic but I wanted to repeat it: <strong>the end of verification happens before the end of validation</strong>.<br />
If we place the two activities in the software development cycle, verification happens after coding and software integration activities. Hence there's nothing to verify or validate as long as there's no software or documentation.<br />
However, the position of verification activities depends deeply on the software development cycle.</p>
<h4>Position in the waterfall software development</h4>
<p>In the classical waterfall software development cycle, the verification activities are placed after coding activities. Thus the verification encompasses:</p>
<ul>
<li>Unit tests,</li>
<li>Integration tests,</li>
<li>Alpha 1 tests,</li>
<li>Alpha n tests (if any) ...</li>
<li>Beta 1 tests,</li>
<li>Beta n tests (if any) ...</li>
</ul>
<p>The naming of the test phases is given here as an example. You may name your test phases as you wish.<br />
Just remember that there's no test phase in real clinical conditions in this example.<br />
<br />
The diagram below summarizes that. Verification activities are highlighted with a shaded blue background. Note that the shaded blue:</p>
<ul>
<li>covers a part of coding and integration. This is where unit and integration tests take place,</li>
<li>doesn't cover the end of the cycle (where validation happens).<br /></li>
</ul>
<p><br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-01_m.jpg" alt="Software in medical devices - Software verification Encompasses unit tests, integration tests, and alpha, beta tests" style="display:block; margin:0 auto;" title="Software in medical devices - Software verification Encompasses unit tests, integration tests, and alpha, beta tests, oct. 2012" /></p>
<h4>Position in agile software development</h4>
<p>Agile development is a continuous development cycle. Every activity of software development happens during each iteration.<br />
If an iteration implements new functions, these functions have to be verified. Even if an iteration is only made of bug fixes, these fixes have to be verified. So, verification happens during each iteration. <br />
The diagram below highlights with a shaded blue background the verification activities in an iteration.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-02_m.jpg" alt="Software in medical devices - Software verification activities in agile iteration" style="display:block; margin:0 auto;" title="Software in medical devices - Software verification activities in agile iteration, oct. 2012" />
<br />
But iterations are only the core of agile software development in regulated environment. As I already explained <a href="https://blog.cm-dm.com/post/2012/05/12/How-to-develop-medical-device-software-with-agile-methods">in the series of articles How to develop medical device software with agile methods</a>, iterations shall be followed by a phase of consolidation.<br />
The consolidation aims at verifying that what was developed earlier is consistent, risk-free and bug-free software.<br /> So the consolidation phase also contains verification activities.<br />
Whereas iterations contain incremental verification, consolidation contains a second verification pass over the software.<br />
<br />
The diagram below summarizes that. In shaded blue, the phases where verification happens.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-03_m.jpg" alt="software in medical devices - Continuous software verification during Iterations and in Consolidation" style="display:block; margin:0 auto;" title="software in medical devices - Continuous software verification during Iterations and in Consolidation, oct. 2012" /></p>
<h3>IQ, OQ and PQ</h3>
<p>Software is present not only in medical devices but also in the production plants of medical or pharmaceutical products.<br />
When software is used in equipment to control the production or manufacturing of products, it has to be verified and validated.<br />
The canonical model of verification of such equipment (software or not) is made of three phases:</p>
<ol>
<li>Installation Qualification,</li>
<li>Operational Qualification,</li>
<li>Performance Qualification.</li>
</ol>
<p>Note that the word <em>qualification</em> is used instead of <em>verification</em>. I won't argue about the differences between those two words.<br />
This model is a very robust one, and we can borrow some ideas from it to verify medical device software.</p>
<h4>Installation qualification</h4>
<p>The Installation Qualification verifies that the equipment was installed according to installation drawings and specifications.<br />
For software, this means that your verification tests shall include the installation procedures. One or more tests shall exist to verify the installation of the software.</p>
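Such an installation check can be automated. Here is a minimal sketch in Python; the file names are invented for the example, and a real IQ protocol would take its expected manifest from your installation specification:

```python
from pathlib import Path

# Invented install manifest; a real one comes from the installation specification.
EXPECTED_FILES = ["app.exe", "config.ini", "user_manual.pdf"]

def installation_qualified(install_dir, expected=EXPECTED_FILES):
    """IQ check: report whether every file listed in the installation
    specification is present in the install directory, and which are missing."""
    root = Path(install_dir)
    missing = [name for name in expected if not (root / name).is_file()]
    return len(missing) == 0, missing
```

Each missing file would become a recorded deviation in the IQ report.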
<h4>Operational Qualification</h4>
<p>The Operational Qualification includes test cases for the start-up, operational use, maintenance, safety functions and alarms of the equipment.<br />
For software, this means that your verification tests shall include tests of all these functions and states. Your verification tests shall also include the inspection of the software documentation delivered to the end-user.<br />
Software verification is not limited to the main functions. All functions, including the least used, shall be tested one by one. The same goes for the software documentation, which shall be inspected to verify that it contains the information required to use your software safely.</p>
<h4>Performance Qualification</h4>
<p>The Performance Qualification aims at verifying that the user requirements and safety requirements are fulfilled.<br />
For software, this means that your verification tests shall contain scenarios to test user requirements and safety requirements. These scenarios shall be close to real use, in contrast to the function-by-function tests of the previous phase.</p>
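To show the contrast with function-by-function testing, here is a minimal Python sketch of a scenario-style PQ test. The record-keeping functions are invented stand-ins for real product functions; the point is that the scenario chains them end to end, the way a user would:

```python
# Invented stand-ins for product functions: a dict-backed patient record store.

def create_record(store, patient_id):
    store[patient_id] = {"measurements": []}

def add_measurement(store, patient_id, value):
    store[patient_id]["measurements"].append(value)

def mean_measurement(store, patient_id):
    values = store[patient_id]["measurements"]
    return sum(values) / len(values)

def scenario_record_and_review():
    """User scenario: create a record, enter two temperature measurements,
    then review their mean -- several functions chained, as in real use."""
    store = {}
    create_record(store, "P-001")
    add_measurement(store, "P-001", 36.5)
    add_measurement(store, "P-001", 37.1)
    return mean_measurement(store, "P-001")
```

An OQ test would exercise <code>add_measurement</code> alone; the PQ scenario checks that the whole chain satisfies the user requirement.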
<h4>Software verification categories</h4>
<p>To sum up what I said above, the software test cases shall contain verification of:</p>
<ul>
<li>Software installation,</li>
<li>Software documentation delivered to end user,</li>
<li>Functions of software one by one,</li>
<li>Scenarios of use of software.</li>
</ul>
<p>On top of that, we can add the verification made by:</p>
<ul>
<li>Unit or automated tests and,</li>
<li>Integration tests.</li>
</ul>
<p>Unit and integration tests should be planned before the other tests mentioned above. That's logical: while you're still integrating software, you can't test the installation procedure or the functions from A to Z.</p>
<h3>Software embedded or standalone</h3>
<p>Verification activities are different if software is embedded or standalone.<br /></p>
<h4>Standalone software</h4>
<p>For standalone software, all the tests described above follow their own planning, with limited constraints. Most of the time, the only things that may be missing are test data, interfaced systems or - worse - the end-users who run the test scenarios.<br />
Test data, interfaced systems and end-users are definitely the critical resources of tests.</p>
<h4>Embedded software</h4>
<p>For embedded software, the constraints on tests are usually stronger. The planning of software tests depends on the availability of hardware.<br />
The verification is usually split in two phases:</p>
<ul>
<li>Tests in simulated hardware environment,</li>
<li>Tests with real hardware.</li>
</ul>
<p>Both phases may contain the test sub-phases described above: installation, documentation, functions one by one, and so on. This can make the planning of verification tests quite long and complex.<br />
As for critical resources, embedded software has those of standalone software, plus the hardware the software runs on.<br /></p>
<h3>Who verifies it and where?</h3>
<h4>Who?</h4>
<p>People who test software aren't the same in every phase of verification.<br /></p>
<h5>Different people at different stages</h5>
<p>During the early stages of verification, tests are done by engineers:</p>
<ul>
<li>Unit tests by software developers,</li>
<li>Integration tests by integrators (or software testers),</li>
<li>Deep testing of functions one by one by software testers (or integrators)</li>
</ul>
<p>During the late stages of verification, tests are done by people with knowledge of the operational use of the software:</p>
<ul>
<li>Test of scenarios by product champions (or software testers with operational knowledge or selected end-users).</li>
</ul>
<p><img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-04_m.jpg" alt="Software in medical devices - Software verification is made by software engineers, integrators, testers, end-users" style="display:block; margin:0 auto;" title="Software in medical devices - Software verification is made by software engineers, integrators, testers, end-users, oct. 2012" /></p>
<h5>Operational not clinical</h5>
<p>If you re-read the paragraph before the diagram just above, you'll see that I wrote "operational use", not "clinical use" or "medical use".<br />
This distinction is very important. Even if it is of high value to have someone with clinical knowledge test the software during verification phases, <strong>there is absolutely no clinical or medical use of software during verification</strong>.<br />
<br />
It is possible to have a doctor test your software with scenarios, or even with free tests, during verification.<br />
With standalone software, you may have the feeling that the conditions are the same (basically a doctor using software on a PC), whether in verification or in validation.<br />
This is where, I think, the confusion between verification and validation resides, especially for standalone software.<br />
But I'm going too far. I should leave some material for my next article!</p>
<h4>Where?</h4>
<p>Software passes through various locations and platforms before being installed on the machine/PC of the end-user.<br />
The platform on which software is installed depends on the test phase and on whether the software is standalone or not.<br />
For standalone software:</p>
<ul>
<li>Unit tests on software development platform,</li>
<li>Software Integration tests on software development platform or software integration platform,</li>
<li>Deep testing of functions one by one on software integration platform or on test platform with real interfaces,</li>
<li>Test of scenarios on test platform with real interfaces.</li>
</ul>
<p>For embedded software:</p>
<ul>
<li>Unit tests on software development platform,</li>
<li>Software Integration tests on software development platform or software integration platform,</li>
<li>Hardware + Software Integration tests on hardware integration platform,</li>
<li>Deep testing of functions one by one on hardware/software integration platform or on test platform with real equipment,</li>
<li>Test of scenarios on test platform with real equipment.</li>
</ul>
<p>Again, even if tests are performed on the real equipment/PC by an end-user with medical knowledge, they remain verification tests.<br />
None of this is a secret; there's a lot about testing in the literature. I just wanted to stress what verification is.<br />
<br />
<br />
As we've seen here, the verification tests are exclusively technical. I never talked about intended use or clinical assessment and how they intersect/match with verification tests. This is the purpose of <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation">my next article about software validation</a>.</p>https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification#comment-formhttps://blog.cm-dm.com/feed/atom/comments/62New templates: Software Tests Plan, Software Tests Description and Software Tests Reporturn:md5:656f3a84483c30865d9b9ba4b230087d2012-01-30T19:25:00+01:002012-08-30T13:45:11+02:00MitchTemplatesSoftware ValidationTemplate <p>In <a href="https://blog.cm-dm.com/post/2011/11/09/Software-Requirements-Specifications">Software-Requirements-Specifications</a>, I put everything you need to deal with technical requirements of your software.</p>
<p>In <a href="https://blog.cm-dm.com/post/2012/01/04/Template%3A-Software-Design-Description">Software-Detailed-Design</a>, I put everything about the design description of packages and component inside software, their interfaces and their workflows.</p>
<p>In <a href="https://blog.cm-dm.com/post/2012/01/18/Template%3A-System-Architecture-Description">System Architecture Description</a>, I put everything about the description of the whole architecture of system and software.</p>
<p>Now that we have enough templates to describe our software, let's see how to test it.
Who would release software without thoroughly testing it? Nobody, I think.
This is the purpose of these 3 software test templates.</p>
<ul>
<li>The <a href="https://blog.cm-dm.com/public/Templates/software-test-plan-template.doc">Software Test Plan</a> contains all information about the organization of tests and the traceability with SRS requirements.</li>
<li>The <a href="https://blog.cm-dm.com/public/Templates/software-test-description-template.doc">Software Test Description</a> contains the detailed description of each test.</li>
<li>The <a href="https://blog.cm-dm.com/public/Templates/software-test-report-template.doc">Software Test Report</a> contains the results of tests.</li>
</ul>
<p>Note: test = verification. I could have called those documents:</p>
<ul>
<li>Software Verification Plan,</li>
<li>Software Verification Description,</li>
<li>Software Verification Report.</li>
</ul>
<p>The next one will be about delivery.
Stay tuned, my list of templates is growing fast!</p>
<p><strong>I gather all my templates on the <a href="https://blog.cm-dm.com/pages/Software-Development-Process-templates">templates page</a>.</strong> You'll find them all there.</p>
<p>Please, feel free to give me feedback on my e-mail contact@cm-dm.com</p>
<p>I share this template with the conditions of <a href="https://blog.cm-dm.com/post/2011/11/04/License">CC-BY-NC-ND license</a>.</p>
<br />
<br />
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-nd/3.0/fr/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/">Creative Commons Attribution-NonCommercial-NoDerivs 3.0 France License</a>.
https://blog.cm-dm.com/post/2012/01/30/New-template%3A-Software-Tests-Plan#comment-formhttps://blog.cm-dm.com/feed/atom/comments/33