Software in Medical Devices, by MD101 Consulting - Processes
Blog about software medical devices and their regulatory compliance. Main subjects are software validation, IEC 62304, ISO 13485, ISO 14971, CE mark 93/42 directive and 21 CFR part 820.

Computer Software Assurance for Production and Quality System Software
2022-11-11, by Mitch, in Processes

<p>That's a bit like a new album by your favorite band. You've been waiting for it for years. At last, it's been released! Are you going to be impressed or disappointed?<br />
The draft FDA guidance on <em>Computer Software Assurance [CSA] for Production and Quality System Software</em> was published in September 2022. At last!</p> <h4>Scope and purpose</h4>
<p>First, this guidance is neither about Software as a Medical Device (SaMD, a.k.a. standalone software medical device), nor about Software in a Medical Device (SiMD, a.k.a. embedded software).<br />
This guidance is about software used in QMS processes and production processes. It addresses the 21 CFR 820.70(i) requirement. From an ISO 13485 perspective, it addresses clauses 4.1.6 and 7.5.6.<br />
<br />
The reason why this guidance has been published is explained in the <em>Background</em> section. The use of software to automate the realization or the monitoring of processes is quite common. The FDA felt the need to supplement section 6 of the existing guidance on <em>General Principles of Software Validation</em> (GPSV). It's true that this existing guidance, published in 2002, suffers from its age. It is a bit vague on the validation process to establish. Manufacturers had to rely on other guidance documents, like AAMI TIR 36 or ISO/TR 80002-2, to define their QMS software validation procedures.<br />
Even though the draft FDA guidance brings some new recommendations, the AAMI and ISO guidance documents are still useful. We can view this draft guidance as complementary information on how to validate QMS software in line with regulatory requirements.</p>
<h4>Similarities with medical device software</h4>
<p>This CSA draft guidance reads much like a medical device software guidance. It requires:</p>
<ul>
<li>An intended use,</li>
<li>A risk-based approach, like in the <em>Content of premarket submissions for device software functions</em> draft guidance,</li>
<li>A modular approach, like the <em>Multiple Function Device Products: Policy and Considerations</em> guidance,</li>
<li>And a testing strategy resulting from the two above, as in the GPSV guidance.</li>
</ul>
<h4>Intended use</h4>
<p>That's where it all begins, as with medical device software (MDSW).<br />
You can use a QMS software to record your recalls and rely on it alone. Or you may continue to keep hardcopy records, using that software only for a convenient workflow. Same software, different intended uses: the CSA approach will be different as well.<br />
<br />
Logically, the draft guidance also states that 21 CFR 820.70(i) doesn't apply to software that is neither directly used in, nor supports, production or the QMS; namely, software outside the scope of your QMS.<br />
Surprisingly, the draft guidance cites software intended to support business continuity as an example of such software. Beware of that example! If the same software supports both business continuity and record retention, then 21 CFR 820.70(i) is applicable.</p>
<h4>Risk-based approach and modular approach</h4>
<p>The new recommendations (not so new: this way of thinking is already applied by manufacturers experienced in QMS software validation) stem from a risk-based approach. This risk-based approach is based on the intended use of the software.<br />
We can see here a similar approach to the one we have for medical device software in the <a href="https://blog.cm-dm.com/post/2021/12/06/FDA-Draft-Guidance-on-Content-of-Premarket-Submissions-for-Device-Software-Functions">draft guidance on content of premarket submissions for device software functions</a>; a twofold approach (emphasis below from FDA):</p>
<ul>
<li>High process risk software: <em>Software used <strong>directly</strong> as part of production or QMS</em>,</li>
<li>Not high process risk software: <em>Software used to <strong>support</strong> production or QMS</em>.</li>
</ul>
<p>Note that the wording <em>low risk</em> isn't present in the guidance.<br />
<br />
This looks like what we have in gestation for medical device software, a twofold approach based on the intended use of the medical software function:</p>
<ul>
<li>Basic software documentation level, for low-risk medical software function,</li>
<li>Enhanced software documentation level, for high-risk medical software function.</li>
</ul>
<p>We can see here a new paradigm (or a new fashion) appearing with this binary approach to all kinds of software by the FDA.<br />
Exit the major / moderate / minor levels of concern!<br />
<br /></p>
<h4>Reasonably foreseeable risk</h4>
<p>The draft guidance also insists on the way risks should be managed. Organisations should <em>consider which failures are reasonably foreseeable (as opposed to likely)</em>. We see here a difference in treatment between medical device software risks and production or QMS software risks.<br />
To say it (a bit too) short:</p>
<ul>
<li>Every medical device risk shall be treated seriously, since it can lead to direct (or indirect for SaMD) harm to patients,</li>
<li>Production and QMS risks can be taken with more flexibility when they aren’t reasonably foreseeable,</li>
<li>Production and QMS risks shall be taken seriously, when they are reasonably foreseeable and result in the <em>potential to compromise the production or the [QMS]</em>.</li>
</ul>
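This decision logic can be sketched in a few lines of Python. This is my own illustrative reading of the guidance's reasoning; the function name and the return values are not FDA wording:

```python
def assurance_effort(reasonably_foreseeable: bool,
                     can_compromise_production_or_qms: bool) -> str:
    """Illustrative reading of the draft CSA guidance's risk logic."""
    # Only failures that are reasonably foreseeable AND able to compromise
    # production or the QMS call for rigorous assurance activities.
    if reasonably_foreseeable and can_compromise_production_or_qms:
        return "rigorous assurance activities"
    # Otherwise, a more flexible, least burdensome approach is acceptable.
    return "flexible, least burdensome approach"
```

Note that both conditions must hold before the full treatment is required, which is where the flexibility of the CSA approach comes from.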
<p>A similar approach with severity and frequency, borrowed from ISO 14971, can be taken to assess production or QMS software risks.<br />
Taking a QMS / production software failure as the initial step in the sequence of events, we can draw several sequences of events depending on the intended use:<br />
<br />
With QMS software:</p>
<ul>
<li>Sequence
<ul>
<li>Data corruption / loss re. process (records, traceability…), e.g.: losing archived batch records</li>
</ul></li>
<li>Consequence
<ul>
<li>Regulatory non-conformity</li>
</ul></li>
</ul>
<p><br />
With production software:</p>
<ul>
<li>Sequence
<ul>
<li>Production data corruption / loss, e.g.: labeling, control card</li>
</ul></li>
<li>Consequence
<ul>
<li>Product non-conformity or production process non-conformity</li>
</ul></li>
</ul>
<p><br />
Or service support software:</p>
<ul>
<li>Sequence
<ul>
<li>Data corruption / loss re. service (service interventions, complaints…), e.g.: not treating complaints</li>
</ul></li>
<li>Consequence
<ul>
<li>Service process non-conformity and possibly a missed adverse event</li>
</ul></li>
</ul>
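The three sequences above can be captured in a small data model. Here is a Python sketch; the field names are mine, and the three examples come from the lists above:

```python
from dataclasses import dataclass

@dataclass
class FailureSequence:
    software: str     # kind of software concerned
    failure: str      # initial software failure in the sequence of events
    consequence: str  # resulting non-conformity

    def describe(self) -> str:
        return f"{self.software}: {self.failure} -> {self.consequence}"

SEQUENCES = [
    FailureSequence("QMS software", "loss of archived batch records",
                    "regulatory non-conformity"),
    FailureSequence("production software", "corruption of labeling data",
                    "product or production process non-conformity"),
    FailureSequence("service support software", "complaints not treated",
                    "service process non-conformity, possibly a missed adverse event"),
]

for seq in SEQUENCES:
    print(seq.describe())
```

Keeping the sequences in a structured form like this makes it easy to attach severity and frequency estimates to each one, in the ISO 14971 spirit.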
<h4>Mitigation action outside software</h4>
<p>The draft guidance also makes use of the principle of mitigation action to decrease the QMS / production software risk profile.<br />
No need to have a hardware mitigation action in place, as in most software embedded in an electromedical device. Some <em>additional controls or mechanisms</em> can be implemented, provided that they are effective. For example, human awareness or review is accepted (provided that operators are trained, and so on, but that is another story).<br />
This is similar to the type of mitigation action accepted by clause 4.3.a of IEC 62304:2006 + Amd1:2015.</p>
<h4>Modular approach</h4>
<p>It is possible to split software into several modules, features or functions. Each function may have a different intended use, with a different risk profile. The guidance gives an example with a spreadsheet. I find the example of an ERP more handy. An ERP has several modules. Some are not QMS software (accounting). Some are QMS software with various functions, like a CRM module. If this module is used to manage customer complaints, a high risk profile is expected for the subset of functions managing customer complaints. But not for the functions managing other aspects of CRM.<br />
This logic is similar to the <a href="https://blog.cm-dm.com/post/2020/09/04/FDA-Guidance-on-Multiple-Function-Device-Products">Multiple Function Device Products: Policy and Considerations</a> guidance.</p>
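As a sketch, the modular assessment of a hypothetical ERP could be recorded like this. The module names and risk assignments are invented for illustration; only the binary "high" / "not high" vocabulary comes from the guidance:

```python
# Risk profile per function: None means outside the QMS scope,
# so 21 CFR 820.70(i) does not apply to that function at all.
ERP_FUNCTIONS = {
    "accounting": None,
    "crm / customer complaints": "high process risk",
    "crm / contact management": "not high process risk",
}

def functions_to_validate(functions: dict) -> list:
    """Keep only the functions that fall under 21 CFR 820.70(i)."""
    return [name for name, risk in functions.items() if risk is not None]

print(functions_to_validate(ERP_FUNCTIONS))
# ['crm / customer complaints', 'crm / contact management']
```

The point of the modular approach is exactly this: the validation effort is scoped per function, not per software product.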
<h4>Bespoke software</h4>
<p>The draft guidance contains nothing clear about bespoke software, namely software developed internally or outsourced to a software development firm.<br />
You won't find a word about design qualification, a step often found in QMS software validation procedures. A practical solution for bespoke software is to follow a software development lifecycle similar to IEC 62304 class A.</p>
<h4>Expected records for high-risk software</h4>
<p>The draft guidance gives some examples of records for high-risk and not high-risk software. The list of records for high-risk software looks quite similar to what is expected by clause 5.7 of IEC 62304.<br />
Some differences, however, are present for not high-risk software. The FDA accepts reduced or even very abbreviated test plans and test reports. They use the wording <em>unscripted testing</em>, something disallowed by IEC 62304.<br />
<br />
The draft guidance also contains a discussion about types of testing strategies. No worries if you don't use all of them. It's quite common to use robust scripted testing for high-risk, limited scripted testing for moderate-risk and unscripted testing for low-risk software. These testing strategies are there for information.</p>
<h4>No IQ/OQ/PQ!</h4>
<p>You will not find these concepts in this guidance. No need to disguise your software validation process as a tweaked IQ/OQ/PQ process, commonly used for hardware equipment! Most validation procedures make use of this IQ/OQ/PQ artefact. It is an easy way to make software-illiterate auditors happy (a breed that tends to disappear).<br />
Thanks to this FDA guidance, we can write a plain old software test plan and a software test report, getting rid of this IQ/OQ/PQ behemoth!<br /></p>
<h4>21 CFR part 11</h4>
<p>The draft guidance reminds the reader that some data (most data) treated or stored by QMS / production software are electronic records according to 21 CFR part 11. The guidance doesn’t give any clue on how to validate software functions in the scope of part 11.<br />
I suggest taking that seriously and not hesitating to put some effort into the part 11 validation protocol (using robust scripted testing and a detailed traceability matrix against part 11 requirements), despite the least burdensome approach claimed by the FDA.</p>
<h4>Conclusion</h4>
<p>We have a clear least burdensome approach in this draft guidance! The previous GPSV guidance left the manufacturer in an unclear situation about the extent of validation to perform for the various QMS / production software risk profiles. And AAMI TIR 36 or ISO/TR 80002-2 didn’t give the FDA’s view.<br />
<br />
With this new guidance, we know we can ease off on low-risk software, and use the two other guides as a source of information on how to validate high-risk software.</p>

Software release vs design transfer
2020-05-15, by Mitch, in Processes (development process, IEC 62304, ISO 13485)

<p>A recurring question is the confusion, or more precisely the difference, between the software release of IEC 62304 and the design transfer of ISO 13485.</p> <p>A short answer is that design transfer happens after a release, but releases don't need to be attached to a design transfer.<br />
When a software development team releases a release candidate (RC) version, or an alpha or a beta version (choose your nomenclature; I prefer RC), we're still in design. The product isn't validated yet.</p>
<p><img src="https://blog.cm-dm.com/public/26-release-vs-design-transfer/.release-and-design-transfer-embedded-sw_m.png" alt="release-and-design-transfer-embedded-sw.png, May 2020" style="display:table; margin:0 auto;" /></p>
<p>This is quite straightforward for embedded systems: the steps can be easily separated. The software is delivered to the system team for system-level tasks like integration or validation. During these tasks, additional releases may be delivered by the software team, to fix integration or functional bugs. Likewise, releases may be delivered during clinical investigations, with a very constrained change protocol, though.
<img src="https://blog.cm-dm.com/public/26-release-vs-design-transfer/.release-design-transfer-place-market_m.png" alt="release-design-transfer-place-market.png, May 2020" style="display:table; margin:0 auto;" /></p>
<p>After the final release, the design is frozen and is transferred to industrialization. When industrialization processes are validated and the product is cleared by regulatory authorities, the product is placed on the market.</p>
<p>For software as a medical device (SaMD), releases are prone to be more frequent, as there is no hardware specific to the device. Since there is no industrialization step for a prototype, like defining machining methods, the release and the design transfer can be merged into a single step.<br />
Thus, the final software release and the design transfer can be performed at once, during a single review.
<img src="https://blog.cm-dm.com/public/26-release-vs-design-transfer/.SaMD-release-design-transfer-place-market_m.png" alt="SaMD-release-design-transfer-place-market.png, May 2020" style="display:table; margin:0 auto;" /></p>
<p>Likewise, the software deployment before placing it on the market can be a very short task, like uploading the installation bundle to the public server.<br />
<br />
However, this is only possible for small-scale software systems. For large-scale software systems, the hardware system level can be replaced by a software system level, made of subsystems. Multiple releases of subsystems can happen before the final integration and validation of the whole system. For example, a subcontractor can deliver the release of their subsystem before the integration and final release of the whole system.<br />
The final release is then dissociated from the design transfer, usually to complete the software documentation, like IFUs and supporting documents.
<img src="https://blog.cm-dm.com/public/26-release-vs-design-transfer/.big-system-release_m.png" alt="big-system-release.png, May 2020" style="display:table; margin:0 auto;" /></p>
<p>Another case where the design transfer can be dissociated from a release, for SaMD, is when the software is installed in the cloud. The SaMD can be released and tested on a pre-production or staging platform. The design transfer then happens when the system is deployed to the production platform. (Note that when the software is in production in the cloud, it has to be validated according to clause 4.1.6 of ISO 13485.)
<img src="https://blog.cm-dm.com/public/26-release-vs-design-transfer/.release-design-transfer-cloud_m.png" alt="release-design-transfer-cloud.png, May 2020" style="display:table; margin:0 auto;" /></p>
<p>To sum up, it's possible to merge the final release with the design transfer for small-scale projects. For large-scale projects, intermediate tasks require dissociating the final release from the design transfer.</p>

Eudamed Software Development LifeCycle
2019-11-09, by Mitch, in Processes

<p>This is the Eudamed Software Development LifeCycle.<br />
<br />
<a href="https://blog.cm-dm.com/public/Eudamed_Software_Development_Lifecycle.png" title="Eudamed Software Development Lifecycle.png, Nov 2019"><img src="https://blog.cm-dm.com/public/.Eudamed_Software_Development_Lifecycle_m.png" alt="Eudamed Software Development Lifecycle.png, Nov 2019" style="display:table; margin:0 auto;" /></a><br />
<br />
<br />
Welcome to the world of software engineering, Eudamed!</p>

IEC 62366-1 and Usability engineering for software
2018-07-06, by Mitch, in Processes (Agile, IEC 62366, risk management)

<p>Usability is a requirement which has been present in regulations for a long time. It stems from the assessment of use error as a hazardous situation. It is supported by the AAMI HE75 standard, FDA guidances, and the publication of IEC 62366 in 2008, followed by IEC 62366-1:2015.
Although usability engineering is a requirement for the design of medical devices, most people designing software are not familiar with this process. This article is an application of the process described in IEC 62366-1 to software design.</p> <p>Before applying this without critical thinking, please note that what is described below may not be enough for cases where use errors can have severe consequences, e.g. devices intended to be sold directly to end-users. In such cases, requesting the services of specialists in human factors engineering is probably the best solution.</p>
<h4>Usability engineering plan</h4>
<p>Write what you do, do what you write. The story begins with a plan, as usual in the quality world.
The usability engineering plan shall describe the process and the provisions put in place. For standalone software, this process lives in parallel with the software design process. The usability engineering plan can be a section of the software development plan, or a separate document.
The usability engineering plan describes the following topics:</p>
<ul>
<li>Input data review,</li>
<li>Definition of use specification,</li>
<li>Link with risk management,</li>
<li>User interface specification,</li>
<li>Formative evaluation protocol,</li>
<li>Formative evaluation report,</li>
<li>Design review(s),</li>
<li>Summative evaluation protocol,</li>
<li>Summative evaluation report,</li>
<li>Usability validation.</li>
</ul>
<p>Note: you can reuse the structure and content below to write your own usability engineering plan (if you can afford not to pay for usability engineering specialists :-)).</p>
<h4>Usability input data</h4>
<p>Usability input data are a subset of design input data. They are gathered before, or at the beginning of, the design and development project. Depending on the context of the project, they can include:</p>
<ul>
<li>Statements of work,</li>
<li>User requirements collected by sales personnel, product managers …,</li>
<li>Data from previous projects,</li>
<li>Feedback from users on previous versions of medical devices,</li>
<li>Documentation on similar medical devices,</li>
<li>Specific standards, like IEC 60601-1-11 for home use of electromedical devices,</li>
<li>Regulatory requirements, like IFU or labeling.</li>
</ul>
<p>Usability input data are reviewed along with other design input data. So, you should include these data in your design input data review.</p>
<h4>Preparing the use specification</h4>
<p>The use specification is a high-level statement, which contains information necessary to identify:</p>
<ul>
<li>the user groups which are going to be the subject of the usability engineering process,</li>
<li>the use environment in which the device is going to be used,</li>
<li>the medical indications which need to be explored further.</li>
</ul>
<p>The use specification shall include the:</p>
<ul>
<li>Intended medical indication;</li>
<li>Intended patient population;</li>
<li>Intended part of the body or type of tissue applied to or interacted with;</li>
<li>Intended user profiles;</li>
<li>Use environment; and</li>
<li>Operating principle.</li>
</ul>
<p>Preparing the use specification can make use of various methods, for example:</p>
<ul>
<li>Contextual enquiries in the user's workplace,</li>
<li>Interview and survey techniques,</li>
<li>Expert reviews,</li>
<li>Advisory panel reviews.</li>
</ul>
<p>Usually, the use specification is prepared with expert reviews. This method is the simplest to implement (once again, if you can afford not to use other methods :-)).</p>
<p>The use specification is recorded in the usability management file.</p>
<h4>Analysis</h4>
<p>The usability engineering process is performed in parallel to the ISO 14971 risk management process.
Below is a diagram showing the links between the risk management process and the usability engineering process. This diagram is non-exhaustive and for clarification purposes only.
<img src="https://blog.cm-dm.com/public/24-IEC-62366-1/.IEC-62366-1-and-ISO-14971-relationships_m.png" alt="IEC-62366-1-and-ISO-14971-relationships.png" style="display:table; margin:0 auto;" title="IEC-62366-1-and-ISO-14971-relationships.png, May 2018" />
I didn’t represent the software development process on this diagram. The purpose of this article is not to show the relationships between software development and risk management. Please have a look at <a href="https://blog.cm-dm.com/post/2012/08/01/How-to-deal-with-ISO-14971-in-a-software-company">this post on ISO 14971</a> if you’re keen on refreshing your memory on that subject.</p>
<h5>Identifying characteristics for safety</h5>
<p>This step clearly sounds like risk management. It consists of identifying:</p>
<ul>
<li>The primary operating functions in the device,</li>
<li>The use scenarios,</li>
<li>The possible use errors.</li>
</ul>
<p>As a first approach, you can answer the questions in ISO 14971 annex C to identify the characteristics related to safety. If the human-software interaction is prone to be a source of critical hazardous situations, more advanced methods may be required.</p>
<p>These data (primary operating functions, use scenarios and possible user errors) are recorded in the usability management file.</p>
<p>For software, the primary operating functions and use scenarios can be modelled with use-case diagrams and descriptions. Note that a use-case diagram is normally accompanied by a text description of the use-cases.</p>
<h5>Identifying hazardous phenomena and hazardous situations</h5>
<p>This step consists of identifying the hazardous phenomena and hazardous situations. This is classical risk assessment. They are identified with data coming from:</p>
<ul>
<li>The use specification,</li>
<li>Data from comparable devices or previous generations of the device,</li>
<li>User errors identified in the previous step.</li>
</ul>
<p>These elements are documented in the risk management file. They can be placed in a section specific to human factors.</p>
<p>Examples of user-related hazardous phenomena and situations:</p>
<ul>
<li>A warning is displayed, the user doesn’t see it,</li>
<li>A value is out of bounds, the user doesn’t see it,</li>
<li>The GUI is ill-formed, the user doesn’t understand it.</li>
</ul>
<h5>Identifying and describing hazard-related use scenarios</h5>
<p>This step is once again risk analysis: the hazardous phenomena, the sequences of events, and the hazards resulting from human factors are identified.
These elements are documented in the risk management file accordingly.</p>
<h5>Selecting hazards-related scenarios for summative evaluation</h5>
<p>It is not required to submit all hazard-related scenarios to the summative evaluation. It is possible to select a subset of these scenarios based on objective criteria.</p>
<p>Usually, the criterion is: <em>Select hazard-related scenarios where the severity is higher than a given threshold</em>, e.g.: Severity ≥ Moderate. You shall write your own criteria in the usability engineering plan.</p>
<p>Said the other way round, it’s not worth including scenarios with low risks in the summative evaluation.</p>
<h5>Identifying mitigation actions and documenting the user interface specification</h5>
<p>The risks related to the use scenarios are then evaluated according to the risk management plan (severity, frequency, and possibly detectability, if you included that parameter in your risk management plan), and mitigation actions are identified by following the risk management process.</p>
<p>Identification of mitigation actions can be done either before or during the formative evaluations. It is necessary to confirm the validity of the mitigation actions during the formative evaluations.</p>
<p>The mitigation actions are documented in the user interface specification, in order of priority (see 6.2 of ISO 14971):</p>
<ul>
<li>Changes in user-interface design, including warnings like message boxes,</li>
<li>Training to users,</li>
<li>Information in the accompanying documents: IFU and labeling.</li>
</ul>
<p>For software, the user interface specification can be included in the software requirement specification.</p>
<p>Note that warnings in the graphical user-interface can be seen as a design change, and not as information to the user. But this shall be verified in the summative evaluation. The warning message shall be relevant enough, placed at the right step in the workflow, and shall change the user’s mindset to avoid a hazardous situation.</p>
<h4>Design and Formative Evaluation</h4>
<p>The formative evaluation is performed during the design phase. You can have one or more formative evaluations. The sequence of formative evaluations in the design project depends on the software being designed. There is no canonical sequence of formative evaluations.
At least one formative evaluation is required, though this could be a bit too short. Two formative evaluations sound like a good fit. The points not identified or discussed in the first evaluation can be treated in the second evaluation.</p>
<p>To stay relevant to the design and development project, the formative evaluations should be placed before the last design review, since the design and the user interface are “frozen” after that review. This doesn’t mean that the formative evaluation happens during the design review.</p>
<p>The formative evaluation can be done with or without the contribution of end-users. The methods of evaluation depend on the context: questionnaires, interviews, presentations of mock-ups, observation of use of prototypes.</p>
<p>For software, the commonly adopted solution is the presentation of mock-ups or prototypes, with end-user proxies (like product managers or biomedical engineers) and end-users who can “play” with the mock-ups. It is also a good option to let the end-user proxies review the mock-ups to “debug” them before presenting them to real end-users, so that the presentation doesn’t deviate (too much) into wacky requests from end-users.</p>
<h4>Summative evaluation</h4>
<p>The summative evaluation is performed at the end of the design phase. It can be done after the verification, during the validation of the device or, if relevant and possible, during clinical investigations. It aims at bringing evidence that the risks related to human factors are mitigated.
The position of the summative evaluation depends on the context of your project.</p>
<p>The summative evaluation shall be done with a population of end-users statistically significant for the evaluation. E.g. at least 5 users of each profile defined in the use specification (see <a href="https://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/HumanFactors/ucm119190.htm">FDA guidance documents</a> on Human Factors Engineering). The summative evaluation shall be done for every scenario selected according to criteria defined above (e.g. severity ≥ moderate). The methods of evaluation are left to your choice, depending on the context e.g.: use in simulated environment, use in the target environment.</p>
<p>If the data collected during the summative evaluation don’t allow you to conclude on the effectiveness of the mitigation actions, or if new risks are identified, you shall either redo the usability engineering process iteratively, or bring a rationale for the acceptability of the residual risks individually and for the overall residual risk acceptability.
The rationale can be sought in the risk/benefit ratio on the use of your device.</p>
<p>For software, the solution commonly adopted is free testing performed by selected end-users on a beta version or a release candidate. The summative evaluation can end with the analysis of a questionnaire filled in by the selected end-users.</p>
<p>The user interface of the device is deemed validated when the conclusion of the summative evaluation is positive. You’re done, good job!</p>
<h4>Application to agile methods</h4>
<p>The steps described above can be disseminated in the increments of an agile development process.</p>
<h5>Use specification, primary operating functions</h5>
<p>The use specification and primary operating functions are usually defined in the initialization/inception phase of the project, the phase which sets the ground for the software functions and architecture.</p>
<p>Depending on how much you know about the software being developed, the initialization can also be the right time to write the use scenarios. If you already know, say, 80% of the user requirements, you can write the use scenarios and make the risk assessment on these scenarios at the beginning of the project. If you don’t know much about your future software, the use scenarios have to be defined/updated during the iterations.</p>
<h5>Iterations and usability engineering</h5>
<p>The next steps of the usability engineering process are performed during iterations, as shown in the following diagram and explained in the next subsections.
<img src="https://blog.cm-dm.com/public/24-IEC-62366-1/.Agile-iteration-and-iec-62366-1_m.png" alt="Agile-iteration-and-iec-62366-1.png" style="display:table; margin:0 auto;" title="Agile-iteration-and-iec-62366-1.png, May 2018" /></p>
<h5>Use scenarios and hazards mitigation</h5>
<p>The “objects” you manipulate during the iterations are epics and user-stories. They can be seen as use scenarios or small chunks of use scenarios, depending on their size. Thus, you can use them to identify hazards related to user-errors, identify mitigation actions, and update the user-interface specification accordingly.</p>
<p>Depending on the items present in the backlog (e.g. a brand-new use scenario), it is also possible that you will have to update the use specification and the list of primary operating functions during an iteration.</p>
<h5>Formative evaluations</h5>
<p>Agile methods usually define “personas”, which represent the user-profiles, and are used by the software development team to understand the behavior of the users.</p>
<p>You may base your formative evaluation on the use of these personas. As the end-user proxy for the team, the product owner is responsible for the formative evaluation. He/she does the formative evaluation of the user-stories. He/she may invite a person external to the team (or to the company) to participate in the formative evaluation.</p>
<p>You can do the formative evaluation during the demonstration of the software at the end of the iteration. Depending on the results of the formative evaluation, new items related to the user-interface may be added to the backlog and implemented in a further iteration.</p>
<h5>Summative evaluation</h5>
<p>The summative evaluation is placed after the verification phase of the agile software development process. It is performed as described in the “Summative Evaluation” section above.</p>
<p>Incremental summative evaluation may be performed with intermediate releases. I don’t recommend that method: the user-interface is subject to change in a further intermediate release, invalidating the conclusions of an incremental summative evaluation.</p>
<p>Put another way, summative evaluation is not, by nature, an agile activity.</p>
<h4>Conclusion</h4>
<p>I hope you have a better understanding of how to implement IEC 62366-1:2015 in your software development process. Remember that I'm a software person above all; human factors engineering isn't my background.<br />
Congratulations and hate comments are welcome!</p>
<p>Edit: Templates<br />
You will also find in <a href="https://blog.cm-dm.com/pages/Software-Development-Process-templates">the templates repository page</a>, two templates useful to generate records of your usability engineering process:</p>
<ul>
<li><a href="https://blog.cm-dm.com/public/Templates/Usability-Engineering-File.docx">Usability Engineering File</a>,</li>
<li><a href="https://blog.cm-dm.com/public/Templates/Usability-Summative-Evaluation.docx">Usability Summative Evaluation Plan and Report</a>.</li>
</ul>
<p><br />
I share these templates under the conditions of the <a href="https://blog.cm-dm.com/post/2011/11/04/License">CC-BY-NC-ND license</a>.</p>
<br />
<br />
<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-nd/3.0/fr/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/fr/">Creative Commons Attribution-NonCommercial-NoDerivs 3.0 France License</a>.
<h3>How to validate software development tools like Jira or Redmine?</h3>
<p><em>2016-07-01, by Mitch. Tags: Processes, development process, Software Validation.</em></p>
<p>Following the discussion on <a href="https://blog.cm-dm.com/post/2016/06/10/ISO/TR-80002-2%3A-lastest-news-on-Validation-of-software-for-medical-device-quality-systems">ISO/TR 80002-2 and AAMI TIR 36 in the previous article</a>, here are some tips on how to validate workflow and data management software like Jira or Redmine.</p> <h4>Why validate?</h4>
<p>First point of view, quality-oriented: the purpose of workflow and data management tools for software development is close to that of document management tools. Both are designed to record information, which is evidence of the application of QMS provisions:</p>
<ul>
<li>Document management tools for all documents and records,</li>
<li>Software development workflow tools for artifacts produced during software lifecycle activities.</li>
</ul>
<p>Another point of view, more technical: workflow and data management tools for software development are connected to software configuration management (SCM) tools:</p>
<ul>
<li>SCM tools need to be validated: if your SCM tool doesn't work, you could generate a wrong version and put patients at risk,</li>
<li>Workflow tools need to be validated: if your workflow tool doesn't work, you could miss a design review, place a non-validated version on the market, and put patients at risk.</li>
</ul>
<h4>When to validate</h4>
<p>The probability of such risks is in most cases very low, thus the effort of validation should be proportionate to these risks.<br />
Depending on your context, you could even conclude that the validation is not mandatory. For example:</p>
<ul>
<li>If design reviews are managed outside the tool,</li>
<li>If the software documentation is extracted from the tool in PDF files, which are formally reviewed,</li>
<li>If the tool is only used to manage developers' daily tasks, without a formal link to software documentation.</li>
</ul>
<p>In any case, the rationale for validating or not, and the extent of the validation effort, shall be recorded in a document (see the risk assessment below).</p>
<h4>How to validate</h4>
<p>Software workflow management tools are off-the-shelf tools but highly configurable.<br />
The validation can be managed in four steps:</p>
<ul>
<li>The definition of your requirements,</li>
<li>The risk assessment on your software development process,</li>
<li>The qualification of the tool vendor,</li>
<li>The qualification of the tool itself configured for your needs.</li>
</ul>
<h5>Requirements</h5>
<p>First of all, the user requirements shall guide the validation process.<br />
The best way to do this is to write a statement of work or a software requirements specification document containing the user needs. Such a document should be written by the software team in collaboration with the quality team. It should also be based on your software development process.</p>
<h5>Risk assessment</h5>
<p>Once you've described the requirements, you can do a risk analysis on the software tool and its environment. For example:</p>
<ul>
<li>Risk of software failure of the tool,</li>
<li>Risk of insufficient training of users,</li>
<li>Risk of failure of the vendor.</li>
</ul>
<p>If all identified risks are deemed acceptable, you can stop the validation process here. Otherwise, you continue the process.</p>
<h5>Qualification of the vendor</h5>
<p>Applying a purchase procedure is probably the most straightforward solution. However, the general procedure may lack specific selection and evaluation criteria for that kind of supplier. You can define criteria on:</p>
<ul>
<li>Software tools functions, based on the software requirements,</li>
<li>Tool vendor capabilities to provide support when installing, configuring and maintaining the tool.</li>
</ul>
<p>Your purchase department will add criteria on vendor pricing policy :-) If there is only one tool matching your needs (Jira is Jira, you know what I mean) then it's not necessary to spend too much time on the vendor selection.</p>
<h5>IQ/OQ/PQ</h5>
<p>When you have selected the tool and its vendor, it is time to qualify the tool itself.<br />
A very classical IQ/OQ/PQ process is the best way to organize the qualification. This is not the way it is presented in AAMI TIR 36 (see previous article), but auditors and inspectors are used to seeing such qualification steps.<br />
For software, all these steps can be gathered in a single software test plan, with:</p>
<ul>
<li>Installation Qualification: tests by inspection that the tool is correctly installed and that personnel is trained,</li>
<li>Operational Qualification: test cases on the tool's functions, with traceability between test cases and the statement of work,</li>
<li>Performance Qualification: a kind of beta test phase, during which the software is used in real conditions for a period of your choosing (one or two months, or longer).</li>
</ul>
<h4>When to revalidate</h4>
<p>The characteristic of such software is its perpetual enhancement to match new user needs. The tool is subject to configuration changes, minor or major, to adapt the workflow to the real software development process.<br />
You will need to assess the impact of every configuration change to the workflows implemented in the tool. It may be wise to wait until you have a minimum set of changes to implement: validation of changes can be expensive, with a possible update of all validation records.</p>
<h4>Conclusion</h4>
<p>Validating tools for the management of software development workflow is not rocket science. It's based on a risk assessment on the process where this tool is used, a good set of requirements, including non-software requirements, and a test plan to verify these requirements in the deployed software version.<br />
<br />
<br />
<br />
If you don't want to start from a blank page, see the QMS software validation templates in the <a href="https://blog.cm-dm.com/pages/Software-Development-Process-templates">Template Repository for software</a>: Software Validation Plan, Software Validation Protocol and Software Validation Report.</p>
<h3>Validation of software used in production and QMS - Part 1 introduction</h3>
<p><em>2015-06-19, by Mitch. Tags: Processes.</em></p>
<p>Validation of software is an unlimited source of topics!<br />
After discussing <a href="https://blog.cm-dm.com/post/2014/03/13/Validation-of-compiler-and-IDE-Why%2C-when-and-how-to-Part-1">in a previous article the validation of software in development process</a>, let's see how to validate software used in production processes and in the management of QMS documents and records.</p> <p><br />
<em>Edit June 2016: this article remains relevant with the new requirements on software validation found in ISO 13485:2016.</em><br />
<br /></p>
<h4>Why validate?</h4>
<p>Software may hide bugs, it may be misconfigured, it may be misused. For all these reasons, software may give wrong results and should be validated.<br />
The requirements of software validation stem from these practical reasons.</p>
<h5>FDA QSR</h5>
<p>Regarding US regulations, software validation has been required for almost twenty years, namely since June 1, 1997.<br />
Article 21 CFR 820.70(i) reads:</p>
<ul>
<li><em>Automated processes. When computers or automated data processing systems are used as part of production or the quality system, the manufacturer shall validate computer software for its intended use according to an established protocol. All software changes shall be validated before approval and issuance. These validation activities and results shall be documented.</em></li>
</ul>
<p>Thus, any software used in production or in the QMS by a manufacturer shall be validated.<br />
<br />
Likewise, 21 CFR part 11 adds requirements about the management of electronic records and electronic signatures. If software used by a manufacturer manages such elements, it shall be validated.</p>
<h5>ISO 13485:2003</h5>
<p>ISO 13485 in its current version requires validating software used in production processes. Namely, section 7.5.2 <em>Validation of processes for production and service provision</em> of the standard reads:<br /></p>
<ul>
<li><em>The organization shall establish documented procedures for the validation of the application of computer software for production and service provision that affect the ability of the product to conform to specified requirements.</em><br /></li>
</ul>
<p>There is no requirement to validate software used in any other process or in the QMS. The validation scope is very narrow, compared to US regulations.</p>
<h5>ISO 13485:2015 DIS2</h5>
<p>As we've already seen <a href="https://blog.cm-dm.com/post/2015/03/04/ISO-13485-201X-DIS2">in a previous post</a>, the draft of the future version of ISO 13485 still contains the software validation requirements in section 7.5.2.<br />
And it adds new validation requirements for software used in the QMS. The new requirement in section 4.1.6 reads:</p>
<ul>
<li><em>The organization shall document procedures for the validation of the application of computer software used in the quality management system.</em></li>
</ul>
<p>Thus, any software used in a company that claims ISO 13485 compliance for all of its processes shall potentially be validated. <br />
ISO 13485:2015 should be released officially in 2016 and harmonized in late 2016 or 2017.<br />
<br />
Thus, when this new version comes into the list of harmonized standards in Europe, companies will have to validate software within the same scope as US regulations.<br />
Only 21 CFR part 11 electronic records & signatures requirements will remain US specific.</p>
<h4>Scope of validation</h4>
<p>The scope of validation doesn't include all software used by a manufacturer. The scope includes:</p>
<ul>
<li>Software tools connected to a production equipment, a control equipment, a measuring equipment:
<ul>
<li>This is the most obvious case, since such tools are already in the scope of ISO 13485:2003</li>
</ul></li>
<li>Computer-Aided Production Management (CAPM), containing documents and records for production (production routings, inspection plans, production and inspection records...):
<ul>
<li>This case is connected to the previous one and already in the scope of ISO 13485:2003</li>
</ul></li>
<li>QMS Software tool managing documents and records:
<ul>
<li>Document Management tool, like Alfresco,</li>
<li>Excel sheet containing QMS records, with macros or formulas, like a sheet containing CAPA and a macro to compute due date,</li>
</ul></li>
<li>Software tool managing customer complaints, hot-line tickets and requests:
<ul>
<li>Tool to receive complaints by mail and store them in a database, like Request Tracker,</li>
<li>Tool to manage software bugs, like Redmine,</li>
</ul></li>
<li>Software tool managing services delivered to the customer:
<ul>
<li>Tool to give remote access to company documentation for technicians on the field,</li>
<li>Tools to manage service reports,</li>
</ul></li>
<li>Any module of an Enterprise Resource Planning (ERP) system dealing with the aspects above,</li>
<li>Any bespoke or home-made software, dealing with the aspects above:
<ul>
<li>Access database,</li>
<li>Website and database developed by a subcontractor,</li>
<li>Any Excel sheet with formulas and macros,</li>
</ul></li>
<li>Cherry on the cake, any tool producing/managing electronic records according to 21 CFR part 11:
<ul>
<li>Any software quoted above,</li>
<li>Backup/recovery tools ensuring the safekeeping of electronic records in the timeframe required by regulations.</li>
</ul></li>
</ul>
<p>That's a lot!<br />
Fortunately there are software applications not in the scope of the validation:</p>
<ul>
<li>Financial and administrative software outside the scope of the QMS,</li>
<li>MS Office (or any other suite) used for daily paperwork (be careful with macros, however),</li>
<li>Mailing system, be careful with mail server agents used to automate processes,</li>
<li>Any software outside the scope of regulations and standards (easy to say but borderline cases are one hell of a question!).</li>
</ul>
<p>Software part of the IT and network infrastructure can be excluded from the scope of the validation, at first sight:</p>
<ul>
<li>Operating systems,</li>
<li>Network tools,</li>
<li>Server tools (virtualization, load balancing ...).</li>
</ul>
<p>But, if there is a risk that a failure of such software can compromise the validation of other software, then it shall be validated too.<br />
Hoping you're not too desperate!<br />
Fortunately, there are escape plans. We'll see them in a further post.<br />
<br />
Next time we'll see how to plan this validation, with the <a href="https://blog.cm-dm.com/post/2015/07/24/Validation-of-software-used-in-production-and-QMS-Part-2-Validation-Master-Plan">Validation Master Plan</a>.</p>
<h3>Validation of compiler and IDE - Why, when and how to? - Part 3</h3>
<p><em>2014-04-11, by Mitch. Tags: Processes, development process, FDA, Guidance.</em></p>
<p>Coming back to the <a href="https://blog.cm-dm.com/post/2014/03/13/Validation-of-compiler-and-IDE-Why%2C-when-and-how-to-Part-1">discussion about validating compilers and IDEs</a>, here are a few more comments I have on this topic.</p> <p>I had feedback from some readers who told me that validating development tools was perceived as mandatory.<br />
No, this is not what I wanted to say!<br />
<br />
<strong>Edit June 2016: this article is obsolete, since the introduction in ISO 13485:2016 of requirements in section 4.1.6 on QMS software validation using a risk-based approach.</strong>
<br /></p>
<h4>Regulatory requirement in production process</h4>
<p>Regulations require validating software involved in processes, but in a limited scope:</p>
<ul>
<li>21.CFR.820.70 (a) requires monitoring and controlling <strong>production processes</strong>, and</li>
<li>21.CFR.820.70 (i) requires validating <em>automated data processing systems</em> and quality system software,</li>
<li>FDA guidance: General Principles of Software Validation provides guidance on the validation of <em>automated process equipment</em> and quality system software,</li>
<li>In Europe, the regulation relies on the ISO 13485 standard, which has section 7.5 about validation of production processes, and their software.</li>
</ul>
<p>Thus, these regulations and guidances are about production processes, <strong>only</strong>.<br />
Edit June 2016: <strong>No</strong>, per ISO 13485:2016.</p>
<h4>No regulatory requirement in development process</h4>
<p>There's no regulatory requirement, either in 21 CFR 820 or in the European Medical Devices Directive, for example, that tells you to validate software development tools.<br />
There's no normative requirement either in IEC 62304, or more generally in ISO 13485.<br />
<br />
Thus, validating tools used in a software development process is <strong>not mandatory</strong>.<br />
Edit June 2016: <strong>Yes</strong>, per ISO 13485:2016, using a risk-based approach.<br />
<br />
<br />
The decision to validate development tools is only based on the conclusions of the risk assessment of the development process.<br />
Edit June 2016: <strong>Yes</strong>, per ISO 13485:2016, using a risk-based approach.<br /></p>
<h3>Validation of compiler and IDE - Why, when and how to? - Part 2: compilers</h3>
<p><em>2014-03-28, by Mitch. Tags: Processes, critical software, development process, risk management, software failure.</em></p>
<p>We saw <a href="https://blog.cm-dm.com/post/2014/03/13/Validation-of-compiler-and-IDE-Why%2C-when-and-how-to-Part-1">in the last post how to validate a software development tool</a>. But we also saw that validating a compiler this way is not a satisfactory task.<br />
Then: Why, when, and how to validate a compiler?</p> <h4>Why?</h4>
<p>A compiler is an assembly of finite state machines which transforms a programming language into assembly code or P-code. Given this complexity, chances are good that some bugs remain in compilers.<br />
Want to see a list of open bugs on a compiler? Have a look at <a href="http://gcc.gnu.org/bugzilla/buglist.cgi?component=c%2B%2B&product=gcc&resolution=---">gcc bugs in C++</a> or <a href="http://gcc.gnu.org/bugzilla/buglist.cgi?component=c&product=gcc&resolution=---">bugs in C</a>!<br />
<br />
Looking at these lists, are you convinced that bugs remain in compilers, and that a compiler should be validated?<br />
Fortunately, most of these bugs arise only when advanced language features are used! And fortunately, the effect of most of these bugs is that the code simply won't compile.<br />
This is why coding rules are important for critical software. Using simplified coding rules (e.g. no use of advanced language features) is the best way to avoid compiler bugs.</p>
<h4>When?</h4>
<p>Now that we understand that a compiler should be validated, when should we do it?<br />
When a compiler bug may generate an unacceptable risk for the patient or the operator using the compiled software.<br />
<br />
Examples (dummy, as usual): There's a bug in the rounding of floating point variables under certain circumstances, and at runtime output values are randomly inconsistent. In which context is it unacceptable:</p>
<ul>
<li>The compiled software is a PACS viewer: displayed images are very rarely affected by the bug, with a few pixels in the wrong color. The practitioner will see that occasionally some pixels are inconsistent. Negligible risk for the patient (the manufacturer didn't even bother to fix the bug!).</li>
<li>The compiled software is in a perfusion pump: the computed volumes are inconsistent from time to time. The manufacturer will discover the bug in software tests (the software is class C, it has hundreds of unit tests). And in the very unlikely case that the bug is not found during tests, if it arises in real conditions, there is still a watchdog protection which cuts off excessively high volume values.</li>
<li>The compiled software is in a pacemaker! The computed energy quantity in an electric shock is inconsistent from time to time. Accuracy in the energy quantity is absolutely vital. Oops! I think I'm going to validate my compiler!</li>
</ul>
<h4>How?</h4>
<p>Validating a compiler is not an easy task.<br />
Given its complexity, only formal validation is possible, namely by mathematical proof.<br />
There's a team at INRIA working on this endless task; they created <a href="http://compcert.inria.fr">a compiler named CompCert</a>. CompCert is the result of their research in formal validation.<br />
<br />
Xavier Leroy, researcher at INRIA, presented the results of his team about C compiler validation at a symposium dedicated to code generation in 2011. Here is his presentation: <a href="http://www.cgo.org/cgo2011/Xavier_Leroy.pdf">http://www.cgo.org/cgo2011/Xavier_Leroy.pdf</a><br />
His presentation looks quite readable at the beginning but quickly becomes very difficult to follow. Just have a look at it to see the complexity of formal compiler validation!<br />
<br />
There are also commercial compiler validation suites which contain thousands of test cases to verify compiler compliance with C language standards. These products don't provide formal validation like CompCert does, but they reduce the probability of an error in a compiler to an extremely low level.<br />
They are also extremely expensive, because they contain a huge history of tests added day by day by their manufacturers.<br />
<br />
That's why, unless you are working on a very critical medical device, it's better to spend time testing the software than the chain of development tools. If there is a bug in the compiler, the generated code will be buggy as well, and your software tests stand a good chance of revealing it.<br />
<br />
<br />
By the way: why shouldn't we validate processors as well?<br />
Remember the fdiv bug in the original Intel Pentium's floating-point unit. :-)<br />
<br />
<br />
See also <a href="https://blog.cm-dm.com/post/2014/04/07/Validation-of-compiler-and-IDE-Why%2C-when-and-how-to-Part-3">the next article</a>, with additional comments on this topic.</p>
<h3>Validation of compiler and IDE - Why, when and how to? - Part 1</h3>
<p><em>2014-03-14, by Mitch. Tags: Processes, critical software, development process, FDA, IEC 62304, risk management, Software Validation, Software Verification, SOUP.</em></p>
<p>Validating the compiler used in software development is a recurring issue. To what extent should a compiler be validated, when, how and why?<br />
In the same vein, we can extend the question of validation to all tools used in the software development environment: integrated development environment, configuration management tools, compiler (and linker), automated test tools.</p> <p><br />
<em>Edit June 2016: this article remains relevant with the new requirements on software validation found in ISO 13485:2016.</em><br />
<br /></p>
<h4>Class of medical device</h4>
<p>If you're in class III FDA or in class III CE mark or in Class C IEC 62304, you have to do it thoroughly if a flaw in a development tool represents an unacceptable risk!<br />
<br />
If you're in class II FDA or in class IIa or IIb CE mark or in class B IEC 62304, you may do it but it's far from being mandatory!<br />
<br />
If you're in class I FDA or in Class I (even class IIa) CE mark or in class A IEC 62304, do it if you have spare time!<br />
<br />
In other words, thorough development tool validation, and especially compiler validation, is only relevant for very, very, very critical software.<br />
<br />
<em>Edit June 2016: following the risk-based approach found in ISO 13485:2016, this rationale remains relevant.</em>
<br />
Perhaps it makes sense for a small subset of embedded software used in class III MD, like pacemakers. Likewise, it makes sense in automotive or airborne systems, where a software failure can mean dozens of casualties.</p>
<h5>Development tools are low risk</h5>
<p>The main rationale for not validating development tools is to consider them low-risk software: if there is a bug in one of these tools, then the software built with them will be buggy, and odds are pretty good that this bug will be discovered during software tests (be it unit, integration, or functional).<br /></p>
<h4>Examples</h4>
<p>Here are some examples of bugs in development tools:</p>
<ul>
<li>My IDE has a bug in the code editor and doesn't save source files in specific conditions. I'm going to see it quickly! Or I'm going to see it in the code of a colleague during a code review.</li>
<li>My source control tool has a bug in the graphical merge function. I'm going to see it quickly as well!</li>
<li>My compiler doesn't correctly cast a floating point value to an integer value under certain circumstances. I'm not going to see it quickly. But I'm probably going to see it during tests, with inconsistent computed values.</li>
</ul>
<p>All in all, tests in the software development process are here to find problems created in early stages of the process. Most of these problems are created by humans (we can't think of everything), and some are created by the tools we use (the guys who created the tools couldn't think of everything).</p>
<h5>Process risk assessment</h5>
<p>What is shown above amounts to assessing the risks of the software development process.<br />
In class I, there is no point in validating development tools thoroughly, since the hazardous situations created by these tools have low severity (the final software is class A of IEC 62304) or low probability (bugs created by these tools are fixed during code reviews or tests).<br />
In class II or III, it's useful to validate these tools, since the hazardous situations created by these tools (namely bugs in the built software) have high severity or high probability (think of the <a href="https://blog.cm-dm.com/post/2012/09/14/Probability-of-occurence-of-a-software-failure">100% probability in software hazards</a>).</p>
<h4>How to validate these tools</h4>
<p>If you have to validate these tools, you may take examples from this guidance: AAMI TIR 36, Validation of software for regulated processes. It has pretty good examples (except for compilers, see below).<br />
You may also get your inspiration from GAMP5 about computerized systems (pull out your credit card if you want to read it!).<br />
<br />
If you don't want to buy any of these documents, there are plenty of examples available on the internet. You just have to seek for IQ/OQ/PQ plans and reports.<br />
Basically the goal of a validation plan is a bit like applying the software development process of IEC 62304 with SOUP only:</p>
<ul>
<li>Assessing risks of the software development process,</li>
<li>Writing requirements of the ideal development tool (including requirements mitigating risks and requirements about the tool vendor),</li>
<li>No architecture or detailed design, but instead selecting the right tool for your needs (don't forget interoperability with other tools),</li>
<li>Tests with three levels:
<ul>
<li>Installation Qualification (IQ), i.e. ensuring that it is deployed and well configured on the development or integration platform, and verifying that all necessary documentation is available,</li>
<li>Operational Qualification (OQ), i.e. verifying that it works and integrates well with other development tools, according to written and approved requirements (including requirements mitigating risks),</li>
<li>Performance Qualification (PQ), i.e. using the development tools in real conditions for a period of time to ensure that the tool and its vendor behave according to expected performances.</li>
</ul></li>
</ul>
<p><br />
If you are in a case where there's no urgent need to validate software development tools, then just write a document with the rationale that led you to choose these tools.<br />
In all cases, however, it's necessary to have a maintenance plan of the development tools, like what you have about SOUPs in IEC 62304:</p>
<ul>
<li>Monitoring published bugs, bugs fixes and new versions,</li>
<li>Assessing risks related to these bugs,</li>
<li>Deciding whether it's necessary to install a new version of the development tool.</li>
</ul>
<h4>Development vs production processes</h4>
<p>There are dozens of articles or memos or documents about validation of tools used in production processes.<br />
The validation method described above is a bit peculiar because it deals with tools used in software development processes (for software design), not production processes (for production of physical goods or for delivery of standardized services).<br />
Thus it is acceptable only for development processes. For production processes, like automated machines or test benches, this validation plan is too simple.<br />
<br />
However, if you want to validate a compiler, this validation plan is a bit incomplete.<br />
Validating a compiler once and for all is a titanic task! We'll see it <a href="https://blog.cm-dm.com/post/2014/03/28/validation-of-compiler-and-IDE-Why%2C-when-and-how-to-Part-2">in the next article</a>.</p>
<h3>Goto Fail</h3>
<p><em>2014-02-28, by Mitch. Tags: Processes, critical software, development process, IEC 62304, software failure, Software Verification.</em></p>
<p>If you haven't heard about Apple's security flaw registered as <a href="http://support.apple.com/kb/HT6150">CVE-2014-1266 on Apple's website</a>, you probably were on planet Mars.<br />
Basically, it was unsafe to use https connections. I couldn't help but write an article about this!<br />
Components dealing with secured connections are absolutely critical. Applying a rigorous development process is the best way to avoid any trouble with these components.</p> <h4>The guilty code</h4>
<p>Here is the code with the security flaw in Apple's ssl library:</p>
<p style="white-space: pre;">
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
uint8_t *signature, UInt16 signatureLen)
{
OSStatus err;
<i>...</i>
if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
goto fail;
<span style="color: #FF0000;">goto fail;</span>
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
goto fail;
<i>...</i>
fail:
SSLFreeBuffer(&signedHashes);
SSLFreeBuffer(&hashCtx);
return err;
}</p>
<p>Quote from <a href="https://www.imperialviolet.org/2014/02/22/applebug.html">Adam Langley's ImperialViolet blog</a>, who quoted it from <a href="http://opensource.apple.com/source/Security/Security-55471/libsecurity_ssl/lib/sslKeyExchange.c">Apple's published source code</a>.</p>
<h4>How to reduce the probability of such flaw?</h4>
<p>Even if it's tempting to blame the developer for this code, that's not the right way to prevent such a situation from happening again. It's the development process as a whole that is questioned here.<br />
Namely, in every step of the process. Considering that collecting user requirements, writing specifications and designing architecture are not called into question (this is SSL, look at the RFC and stuff, ok?), the situation can be avoided by putting safeguards in place during coding and testing.<br /></p>
<h4>Testing</h4>
<p>Tests are a good way to find bugs, not all of them but most of them.<br />
There are plenty of ways to test software (see "the big picture" in <a href="https://blog.cm-dm.com/post/2012/12/13/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-3">this article</a> about tests): unit tests, user tests and so on.<br />
<br />
According to <a href="https://www.imperialviolet.org/2014/02/22/applebug.html">Adam's article</a>, the problem could only be discovered by doing very specific tests with a custom-made TLS stack.<br />
Needless to say, such a tool would itself be subject to errors. How do you test a complex test tool? It would need <a href="http://en.wikipedia.org/wiki/Verification_and_validation">IQ, OQ and PQ</a> to be qualified as the right testing tool. Or, for ISO 13485 aficionados, a tool validation protocol according to section 7.5.2.1 of that standard.<br />
<br />
Thus it appears that tests alone are probably not enough to catch this kind of bug.<br /></p>
<h4>Coding</h4>
<p>We only have the coding phase left!<br />
There are two possibilities to trap this kind of bug:</p>
<ul>
<li>Verifying what developers code,</li>
<li>Changing the way developers code.</li>
</ul>
<h5>Verifying code</h5>
<p>Here we have two possibilities:</p>
<ul>
<li>Human verification,</li>
<li>Automated verification.</li>
</ul>
<p>Human verification is achieved by doing peer code reviews, whereas automated verification is done with the help of static analysis tools or advanced compiler checks.<br />
Both have their advantages and drawbacks (dealing with humans, or dealing with machines designed by humans).<br />
<br />
By the way, these kinds of verification are in line with the IEC 62304 requirements on software unit verification found in sections 5.5.2, 5.5.3 and 5.5.4 of the standard.<br />
So if you want to be in line with IEC 62304 in class B (and C for 5.5.4), I can only urge you to implement unit tests and/or plan code reviews.<br />
<br />
If you're skeptical about the benefits of code reviews, I invite you to read this <a href="https://blog.cm-dm.com/post/2013/10/27/Testing-is-overrated">previous article about code reviews vs tests</a>.</p>
<h5>Changing the way developers code</h5>
<p>Here we have two levers:</p>
<ul>
<li>Pair programming,</li>
<li>Coding standards.</li>
</ul>
<p>I can only urge you to impose coding standards and to try pair programming with your development teams!<br />
IMHO this is the most efficient way to avoid such a bug.<br />
<br />
Coding standards, however, are more efficient when a static analysis tool verifies that they are applied. They usually take the form of a document with dozens of rules, which are difficult to know by heart!<br />
<br />
Pair programming is not easy to implement, especially when managers see it as doubling costs! That's the biggest obstacle to this method!</p>
<h4>For the hell of a goto</h4>
<p>Another remark: strictly ban gotos in your coding practices.<br />
If I were to configure a C compiler on a build server, a goto wouldn't compile.<br />
Just for the hell of it! :-)</p>
<h4>Conclusion</h4>
<p>The best way to put the odds in our favor is to combine several methods:</p>
<ul>
<li>Peer coding reviews and/or pair coding,</li>
<li>Coding conventions,</li>
<li>Static code analysis,</li>
<li>Classical tests.</li>
</ul>
<p>You will be totally in line with sections 5.5, 5.6 and 5.7 of IEC 62304.<br />
<br />
The stricter the development process, the fewer bugs. Hence the software safety classes.</p>https://blog.cm-dm.com/post/2014/02/28/Goto-Fail#comment-formhttps://blog.cm-dm.com/feed/atom/comments/139Testing is overratedurn:md5:a70a47f0ff60c31e3f203e58b69004902013-10-25T19:05:00+02:002013-10-27T19:06:54+01:00MitchProcessesSoftware Verification<p>Software testing is the keystone of bug discovery. Most software engineers and project managers think this assertion is true!</p> <p>But contrary examples exist:<br />
Have a look at this blog article: <a href="http://railspikes.com/2008/7/11/testing-is-overrated">Testing is overrated</a>. Whereas software testing is truly a good way to find bugs, there are other ways to catch them: coding rules, peer reviews and the like.<br />
A very good blog article!</p>https://blog.cm-dm.com/post/2013/10/27/Testing-is-overrated#comment-formhttps://blog.cm-dm.com/feed/atom/comments/136How to validate a software medical device running on web browsers?urn:md5:c190d0a769a68342bd9aa3880e44df1f2013-10-11T14:08:00+02:002013-10-11T14:08:00+02:00MitchProcessesdevelopment processIEC 62304mobile medical appSoftware VerificationSOUP<p>Your company develops medical web apps (HTML/JS, HTML5 or any other client-side technology) and your customers would like them to run on every web browser.<br />
<br />
Web browsers are SOUP, according to IEC 62304. In the case of Chrome and Firefox, there are dozens of versions...<br />
<br />
Does it mean that the software has to be tested - and documented - with every single browser and every single version of each browser?<br />
That's a nightmare!</p> <p><strong>It's simply not possible to test every combination of browser versions</strong>, let alone combinations with OS versions.<br />
<br />
There are roughly 3 solutions that can make things simpler:</p>
<h4>1. Assess the software safety class of the browser-side code</h4>
<p>Is it possible to have it in class A, while the server side is in class B or C? If so, it would decrease the burden: no test requirements for class A in the IEC 62304 standard and lighter risk management, even though tests are still required at the medical device system level to verify and validate it.</p>
<h4>2. Reduce the number of supported versions of OS and browsers</h4>
<p>E.g.: Win 7 or higher, Mac OS X 10.6 or higher, IE 9.x or higher, Safari 5.x or higher, Firefox 14.x or higher, with:</p>
<ul>
<li>tests made on the combinations of OS and browsers in their lowest accepted versions,</li>
<li>a quality procedure for surveillance of all new versions of SOUP,</li>
<li>and an assessment of the release notes or change logs to ensure that new versions are still compatible with your web app.</li>
</ul>
<p>Provided that release notes and change logs are available on time.</p>
<h4>3. Verify on the client side on which browser+OS the code is running</h4>
<p>If this is an unsupported version, display an error to the user and a link to a page with supported versions + hotline number.
<br />
<br />
There might be other solutions. But none of them is magic.<br />
There is no golden rule; we are constrained by section 8.1.2 of the IEC 62304 standard, which requires identifying the version of each SOUP, plus various paragraphs of chapter 7, especially 7.1.3 and 7.4.2, which require assessing the risks related to SOUP.<br />
<br /></p>
<h4>Automated testing?</h4>
<p>Tools like BrowserStack or Selenium can automate the testing, so it's possible to cover a lot of versions in a short time. But maintaining the tests is time-consuming: it's a lot of energy spent testing every new browser version when versions are released almost every week.<br />
<br />
Automated testing doesn't change the root cause of the problem - floating configurations. Hence the idea of limiting the number of supported versions.</p>https://blog.cm-dm.com/post/2013/10/04/Web-browsers-are-SOUP-#comment-formhttps://blog.cm-dm.com/feed/atom/comments/135En route to Software Verification: one goal, many methods - part 3urn:md5:afdc902ef62c854c4943df8d075da5f22012-12-14T12:35:00+01:002012-12-14T12:35:00+01:00MitchProcessesSoftware ValidationSoftware Verification<p>In <a href="https://blog.cm-dm.com/post/2012/12/07/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-2">my last post</a>, I explained the benefits of static analysis. This software verification method is mainly relevant for finding bugs in mission-critical software, but it fits the need for bug-free software in less critical software as well.<br />
Static analysis can be seen as an achievement in the implementation of software verification methods. Yet, other methods exist that fit very specific purposes.</p> <h4>Heavy Metal</h4>
<p>In reference to my previous posts, I place the tests methods here in the <em>Heavy Metal</em> category!<br />
Perhaps some may say these methods aren't so heavy metal. It depends on your experience with these methods, of course.</p>
<h5>Automated GUI testing</h5>
<p>Automated graphical user interface testing sounds very simple, but it is not. It requires a GUI testing engine and a lot of patience from the people who feed the engine with tests.<br />
The main impact of GUI testing is on the composition of the test team. The GUI tests will certainly be assigned to a (poor) guy who will work 100% on this task. A good option is to let him/her do other types of testing and integration as well!<br />
I personally haven't yet been satisfied by any GUI testing tool I've found, either open-source or commercial. If you can quote me one that you're happy with, you're welcome!</p>
<h5>Performance testing</h5>
<p>Testing performance is less specific than testing the GUI because it involves the whole architecture of a product, not only its GUI. As a consequence it involves the whole testing team (this is my experience, maybe not yours).<br />
Here again, a lot of tools exist to do performance tests. Most of them are focused on web apps or databases, with concerns about load balancing and load increase.<br />
This is probably not relevant for embedded software. Tailor-made testing programs are better for testing a specific issue in such software.</p>
<h5>Security testing</h5>
<p>Static analysis tools can find security holes in your code, like buffer overruns. Some other security tests may be necessary for your devices. It may be a good idea to ask IT security consultants to find security holes in your devices.<br />
Embedded software with a wireless connection (even one used very occasionally, e.g. only for maintenance) falls into the scope of this kind of test.</p>
<h5>Statistical methods</h5>
<p>Statistical methods are suited to testing complex algorithms. It is not possible to test all combinations of input values of a complex algorithm. One way to increase the level of confidence in an algorithm is to use such methods.<br />
Most algorithms are based on physical or mathematical laws and can be tested to ensure that the law is verified by the algorithm.<br />
Statistical tests call for techniques like Monte Carlo simulations or chi-squared tests, to name a few. Although I don't have much experience in statistics, I once had to use a Monte Carlo simulation engine. It was a commercial add-on to Excel, which was really nice to use.</p>
<h4>The Big Picture</h4>
<p>To finish this series of posts, here is a diagram, which contains the position of the different verification methods we've seen. I tried to place them according to their complexity and the type of bugs found.<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-08_m.jpg" alt="Software Medical Devices - Position of different verification methods vs their complexity and the scope of bugs found" style="display:block; margin:0 auto;" title="Software Medical Devices - Position of different verification methods vs their complexity and the scope of bugs found, Dec 2012" /><br />
You may certainly change the size and the position of the ellipses, given your experience and the kind of medical devices you work on. And I don't include clinical trials in the end-user tests done for software verification, as discussed in <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation">this article.</a><br />
<br />
The most important point of this diagram is the projection of the ellipses on the x-axis. The union of the projections of all ellipses shall cover the whole x-axis, i.e. all types of bugs shall be sought by these methods:</p>
<ul>
<li>High-level: uses-cases and architecture,</li>
<li>Mid-level: algorithms and components,</li>
<li>Low-level: language pitfalls, coding rules and software units.</li>
</ul>
<p>We have a zone at both extremities of the axis (grey-shadowed) where only one method is able to find bugs efficiently:</p>
<ul>
<li>For very high-level bugs, only end-users can find them,</li>
<li>For very low-level bugs, only static analysis can find them.</li>
</ul>
<p>When these kinds of bugs are found in the field, after software validation:</p>
<ul>
<li>Very high-level bugs are discrepancies between the result given by the software and the result expected by the user; they require medical knowledge to be found and analyzed,</li>
<li>Very low-level bugs are errors in code that can lead to erratic, non-reproducible behavior or simply a crash.</li>
</ul>
<p>Only a fully controlled software development process is able to wipe out all bugs, including those at the extremities of the diagram.</p>
<h4>He forgot code reviews!</h4>
<p>Some may say that I didn't mention code reviews by peers or code inspections (like Fagan inspections, as mentioned in a comment on <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification">this post</a>). This is a kind of verification method, but no <em>live</em> software is involved:</p>
<ul>
<li>neither the software is running,</li>
<li>nor a test tool is running,</li>
<li>only developers can do it.</li>
</ul>
<p>People only need a sheet of paper or a text editor to do code reviews. Code reviews are made by developers, not testers. That's why I didn't put them in the scope of this series.<br />
One could argue that unit tests are made by developers too. Yes, it's true. But unit tests may be run by a tester or a build manager, without the help of any developer.<br />
That's why I don't include code reviews in this overview of software verification, nor requirements reviews or architecture reviews, for example.</p>
<h4>Conclusion</h4>
<p>I tried to make an overview of the software verification methods that exist and how complementary they are. There are probably other methods used by software companies or computer science labs, but I think I have covered 90%-95% of them all. If you know some others, feel free to quote them in the comments!<br /></p>https://blog.cm-dm.com/post/2012/12/13/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-3#comment-formhttps://blog.cm-dm.com/feed/atom/comments/75En route to Software Verification: one goal, many methods - part 2urn:md5:04454fb9e3c32586ebca4b457b92e0812012-12-07T12:45:00+01:002012-12-17T12:02:23+01:00MitchProcessescritical softwareSoftware ValidationSoftware Verification<p>In my <a href="https://blog.cm-dm.com/post/2012/11/30/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-1">last article</a>, I talked about the most classical methods used to verify software: human testing (driven by test cases or not) and unit tests. I was about to talk about static analysis, which I place at a higher level of complexity in the list of verification methods, but first I have to say a bit more about unit tests.</p> <h4>Discussion about unit tests vs verification</h4>
<p>The main characteristic of unit tests is that they change the way developers work:</p>
<ul>
<li>Good unit tests are written before coding,</li>
<li>Less good unit tests are written after coding.</li>
</ul>
<p>I can't write that unit tests written after coding are bad. It's always good to write unit tests; they are just less good if written a posteriori or... during a phase of reverse-engineering (yes, it happens. Don't blame teams who work like that. You never know...).<br />
However, there is one case where unit tests shall be written a posteriori: when a bug is found and fixed. The unit test is written along with the bug fix to maximize the odds of avoiding regressions in future versions.<br /></p>
<h5>Unit tests = coding</h5>
<p>A good implementation of unit tests requires changing the way developers design and code. First you think about the function, then you write the test, and finally you write the code.<br />
So it is not so easy to do unit tests the canonical way. Developers need training to become efficient at writing unit tests, and - most of all - should manifest their willingness to do so.<br />
A whole agile development method was even created around this idea: <a href="http://en.wikipedia.org/wiki/Test-driven_development">Test-Driven Development</a>. It makes intensive use of very early unit tests combined with agile development in short loops.</p>
<h5>Unit tests = verifying</h5>
<p>Unit tests are a tool to verify that software runs the way it was designed, so they are definitely part of the methods used during verification.<br />
But they happen very early in the development process, before the "true" verification phase. If they are not implemented during the coding phases, it's extremely difficult, time-consuming and expensive to write them during the verification phase.<br />
Classical software test cases may be modified or completed during the verification phase if they were not prepared well enough. But this is hardly possible with unit tests, because during verification developers spend all their time fixing bugs.<br />
They don't have time to add more unit tests on components where bugs don't show up. It's too late.</p>
<h5>Unit tests rock</h5>
<p>Unit tests are a very powerful tool. But writing unit tests and coding are intricately intertwined activities. On top of that, they require some changes in the habits of the software development team.<br />
That's why unit tests are not jazzy but definitely rock.</p>
<h4>Rock</h4>
<p>Let's continue with static analysis, another very powerful tool. Contrary to unit tests, there's little to do from the developer's point of view to run static analysis. Most tools are run at build time, and the main effort is interpreting the report generated by the tool.</p>
<h5>Checking the code</h5>
<p>Static analysis can be seen as the code checks and unit tests that other developers have thought about and implemented for you. The main advantage of static analysis is its ability to scan all the code to find issues and report them.<br />
It's the best way to wipe out the most basic bugs found in the C language, like uninitialized memory or null pointers. Open-source static analysis engines can do these types of basic checks:</p>
<ul>
<li>Basic errors like those mentioned above,</li>
<li>Programming rules,</li>
<li>and also code quality metrics (dead code, length of methods, ...).</li>
</ul>
<p>I say that these checks are basic, but writing the engines that do them is absolutely not basic code! Since they are all based on syntactic and semantic analysis, it's as complex as writing a compiler or an interpreter. That's why some people consider that these checks should be built into the compilers.<br />
Some compilers actually do it, because the language specs already contain rules that make these checks mandatory, as in Ada or C#, or because they implement coding standards like MISRA C.<br />
So the frontier between an advanced compiler and a static analysis engine is sometimes blurred.</p>
<h5>Advanced code inspection</h5>
<p>The most advanced static analysis engines can go far beyond these basic checks, for example finding the conditions under which:</p>
<ul>
<li>a division by zero or arithmetic overflow occurs,</li>
<li>a loop doesn't exit,</li>
<li>a database deadlock happens.</li>
</ul>
<p>There are as many possibilities as there are programs.<br />
With such tools, the advantage is covering cases you haven't thought of. The drawback is that some lazy people might rely on the tool to find bugs.<br />
Another drawback of such tools is that you can't rely on just one of them. Each tool has its own checks that another one hasn't. A lot of checks overlap, but there are always grey areas.<br />
It's definitely true that using one is going to decrease the number of hidden bugs in code. But, theoretically, it would be better to use more than one, which is probably not a practically feasible solution.<br /></p>
<h5>Which tools</h5>
<p>A lot of free and open-source tools exist, with varying efficiency. They all do basic checks and more or less advanced checks. Here is a partial list of tools:</p>
<ul>
<li>For C/C++: <a href="http://www.dwheeler.com/flawfinder/">FlawFinder</a>, <a href="http://en.wikipedia.org/wiki/Cppcheck">CppCheck</a>, <a href="http://www.security-database.com/toolswatch/RATS-v2-3-Rough-Auditing-Tool-for.html">RATS</a>,</li>
<li>For Java: <a href="http://checkstyle.sourceforge.net">CheckStyle</a>, <a href="http://www.sonarsource.org">Sonar</a>, <a href="http://findbugs.sourceforge.net">FindBugs</a>.</li>
</ul>
<p>All commercial tools available on the market do both types of checks: basic ones and (more or less) advanced ones. I usually don't quote commercial software in my blog, but I can make an exception for <a href="http://en.wikipedia.org/wiki/Polyspace">Polyspace</a>. It was created after the crash of the European rocket Ariane 5. The bug in the onboard computer responsible for the crash was known as "impossible to find with automated tests". Now it is findable, thanks to Polyspace!</p>
<h5>Static analysis and regulatory requirements</h5>
<p>It is definitely a good idea to implement static analysis for devices with a mid to high level of risk, namely class C according to the IEC 62304 standard, and optionally for class B software. This is my opinion; there's no requirement in the standard that tells you to do static analysis.<br />
There is no regulatory requirement that makes static analysis mandatory for mission-critical software either, but it's better to have it in your software development process. As far as I know, there is no FDA guidance that quotes static analysis (except a small line in the guidance about <a href="http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm089543.htm">software in a 510(k)</a>).<br />
Researchers at the FDA <a href="http://www.embedded.com/design/prototyping-and-development/4007539/Using-static-analysis-to-evaluate-software-in-medical-devices">published an article</a> a few years ago about the benefits of static analysis. This shows the FDA's interest in static analysis for the specific cases quoted in that article.<br />
On the CE mark side, I found no data about static analysis. So use this method based on your own assessment of your situation!<br />
<br />
<br />
I could say a lot more about these methods: problems with false positives and false negatives, how to interpret static analysis logs... There are dozens of fantastic articles on the web. Static analysis is a vivid and exciting subject (when I say exciting, it's true for you if you have kept the little geek inside you alive!).
<br />
<br />
There are even more complex software verification methods. I'll talk about those <a href="https://blog.cm-dm.com/post/2012/12/13/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-3">in the last article of this series</a> en route to software verification!</p>https://blog.cm-dm.com/post/2012/12/07/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-2#comment-formhttps://blog.cm-dm.com/feed/atom/comments/74En route to Software Verification: one goal, many methods - part 1urn:md5:3a0a1d106c1a59c50e6eea568a0dfd262012-11-30T12:06:00+01:002012-12-12T19:02:49+01:00MitchProcessesdevelopment processSoftware ValidationSoftware Verification<p>Software verification is easy to define: demonstrating that software works as it was specified (and without bugs!). But there's no single way to do it.<br />
Let's see what methods we have at hand to verify software.</p> <h4>Classic</h4>
<p>Ask someone how to verify software and he/she will answer: put a guy in front of the guilty machine with a dozen testing procedures.<br />
This is actually the most basic way to test software. Doing this, you are sure that:</p>
<ul>
<li>not all cases will be covered (even with hundreds of testing procedures),</li>
<li>the time initially reserved for tests in the planning won't be enough,</li>
<li>the final software will be full of bugs.</li>
</ul>
<p>Yet most critical bugs should have been wiped out, but some bugs could still be hidden somewhere. That's why it's necessary to use other methods to test software.<br />
<br />
The second most basic way to test software is to give it to a few selected end-users, after the first phase of tests described above.<br />
Doing this, you are sure that:</p>
<ul>
<li>not all cases will be covered (but fewer than before),</li>
<li>the time reserved for tests by the end-user won't be long (he/she is probably a physician and has lots of other things to do),</li>
<li>the final software will be full of bugs (but fewer than before).</li>
</ul>
<p>The second bullet point is false if you pay the selected end-users to do tests, or if you're lucky enough to have a passionate end-user.<br />
Nevertheless, there will still be bugs remaining in the software. Most probably:</p>
<ul>
<li>all critical bugs are wiped out,</li>
<li>some major bugs could still reside somewhere,</li>
<li>there are dozens of minor bugs.</li>
</ul>
<p>But it works so far. And you don't have enough time to fix anything else, so the device is placed on the market as is.<br />
This is true for devices with a low level of risk, namely class A according to the IEC 62304 standard, or possibly low classes according to national regulations.<br />
Testing all possible cases is not possible within the timeframe and budget of most software development. There are always remaining bugs that are found when the device is already placed on the market. This is not a big deal as long as the remaining bugs don't impair the risk-benefit assessment of the device.</p>
<h4>Jazz</h4>
<p>Most bugs are triggered by small errors of consistency in code: for example, not testing whether an input parameter is inside a given range of values before starting computations. <a href="http://en.wikipedia.org/wiki/Unit_testing">Unit testing</a> is a method to ensure that these tiny inconsistencies are detected and fixed.<br />
When they became popular a few years ago, unit tests seemed to be a miraculous method to kill bugs in the egg (I personally was an aficionado of unit tests!).
But this method is not so miraculous and has its own pitfalls.<br />
By doing this, you are sure that:</p>
<ul>
<li>the time reserved for coding unit tests doesn't fit into the planning,</li>
<li>some critical bugs and major bugs are wiped out,</li>
<li>not all cases of inconsistency are covered (but fewer than without unit tests).</li>
</ul>
<p>The problem with unit tests is that the software developer is at the center of the process:</p>
<ul>
<li>he/she has to decide which tests to write,</li>
<li>he/she could write an erroneous unit test (do we need a unit test to test the unit test ???),</li>
<li>he/she may not have the time to write it, in the end.</li>
</ul>
<p>So, unit tests bring a higher level of confidence that the final software has fewer hidden bugs.<br />
Compared to classic methods, they wipe out small inconsistencies that could otherwise lead to critical bugs through a chain of events in the algorithms running in the software.<br />
They're really complementary to the classic methods. Classic methods are more prone to capture bugs in use cases or bad behaviors relative to high-level requirements. Unit tests are more prone to capture bugs in workflows or in algorithms at a lower level.<br />
It is definitely a good idea to implement unit tests for devices with a mid level of risk, namely class B according to the IEC 62304 standard!
<br /></p>
<h5>Note</h5>
<p>Some languages have contract checks built into the language specification, like the pioneering <a href="http://en.wikipedia.org/wiki/Eiffel_(language)">Eiffel</a> language. Eiffel requires design by contract: basically, each input parameter and each output parameter is tested against a rule implemented in the code of the procedure/method. But Eiffel has always remained a niche language, and design by contract has only really taken off with later language versions and libraries, like C#'s Code Contracts in .NET and Ada 2012.
<br />
<br />
<br />
Going higher in complexity, next time we'll look at static analysis: another method, which deserves its <a href="https://blog.cm-dm.com/post/2012/12/07/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-2">own article</a>!</p>https://blog.cm-dm.com/post/2012/11/30/En-route-to-Software-Verification%3A-one-goal%2C-many-methods-part-1#comment-formhttps://blog.cm-dm.com/feed/atom/comments/73V&V: verification & validation, doing it right.urn:md5:48e162ea5cac37fb906660986f5187862012-11-16T12:34:00+01:002012-11-30T12:08:57+01:00MitchProcessesdevelopment processSoftware ValidationSoftware Verification<p>Writing about V&V in <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification">two previous posts</a>, I had a lot of comments from people on a well-known social network. They made corrections to my view of V&V and brought their own definitions.<br />
Here is an excerpt of their comments.</p> <h4>Doing the right product and the product right</h4>
<p>Someone gave these two definitions:</p>
<ul>
<li>Validation is doing the right product.</li>
<li>Verification is doing the product right.</li>
</ul>
<p>I like these two definitions: they are concise and make good mnemonics.<br />
<strong>If there is one thing you should remember from this post, it's these two definitions!</strong></p>
<h4>Validation is validating requirements</h4>
<p>Yes, to do the right product, it's necessary to validate the requirements.<br />
But not only the requirements, it's also necessary to validate that the final product matches the initial concept that was described in those requirements.</p>
<h4>Verification and validation aren't sequential</h4>
<p>Yes, verification and validation aren't sequential. Validation begins before verification, even before coding. The first step of validation is validating the requirements to ensure that the product is well defined.<br />
But verification ends before validation. I can't validate software that hasn't been verified from A to Z beforehand. How could I validate software with functions that haven't been tested?<br />
To reconcile everyone, I should have written in my last two posts that the end of verification happens before the end of validation. I edited my last two posts about V&V accordingly.<br /></p>
<h4>Validation is broader than verification</h4>
<p>Yes, it's true: validating a device goes beyond the scope of software (except for standalone software devices). That's why some people talk about software validation and device validation as two separate concepts.<br />
But for software taken alone, the scope of software verification and software validation is:</p>
<ul>
<li>Software, and</li>
<li>Its documentation.</li>
</ul>
<p>Here's my rationale:<br />
Every input data:</p>
<ul>
<li>Intended use,</li>
<li>Risk assessment,</li>
<li>Regulatory requirements,</li>
<li>Usability requirements and,</li>
<li>Last but not least, user requirements, and so on ...</li>
</ul>
<p>Can be translated into more detailed requirements:</p>
<ul>
<li>Use case scenarios,</li>
<li>Functional requirements and non functional requirements, and</li>
<li>Documentation/labelling requirements.</li>
</ul>
<p>These more detailed requirements can be translated into:</p>
<ul>
<li>Architecture,</li>
<li>Interfaces and,</li>
<li>More detailed software requirements, and</li>
<li>Software units.</li>
</ul>
<p>Which are translated into:</p>
<ul>
<li>Software code,</li>
<li>Configuration data,</li>
<li>User documentation and administrator/maintenance documentation.</li>
</ul>
<p>All of these artifacts are tightly bound by traceability.<br />
So, when I verify software and its documentation, my software verification has the same scope as my software validation. And I ensure this is true through traceability from top-level requirements to most refined requirements, software units, software code and their tests.</p>
<h4>Why dissociating device V&V and software V&V?</h4>
<p>In all of this discussion, I made the assumption that device V&V and software V&V can be differentiated. One could argue that it's not relevant to make any difference. It's the device that everybody wants to validate, in the end.<br />
I think that device V&V and software V&V should be differentiated for technical reasons, when software is prominent in a device, e.g. when up to 50% of requirements are addressed by software:</p>
<ol>
<li>Though everybody wants to minimize it, software is a source of complexity; users nevertheless tend to think that new functions can be added with a few mouse clicks by an engineer,</li>
<li>The software development process has its own pace. Prototypes can be produced very quickly (also true with hardware and fast prototyping), but it takes a lot of time and rework to make a usable product,</li>
<li>Using simulators, it's not necessary to have the final hardware ready to verify and validate software,</li>
<li>Validating software includes validating its graphical user interface, which can be a long process spent with end-users, if the GUI is complex,</li>
<li>Testing software takes a long time and it's difficult to anticipate all software failures.</li>
</ol>
<p>I could have found tons of arguments to show that validating software can be separated from validating a device.<br />
The last argument I could use: regulations ask me to do so. The CE Mark directive demands software validation (Annex I, 12.1.a of the current directive, and even more in the future directive to be released in 2014). In the US, 21 CFR 820.30(g) requires the same, and the FDA released its guidance on General Principles of Software Validation 15 years ago; it is still in force.<br /></p>
<h4>So, where is the truth?</h4>
<p>I haven't seen accurate definitions of software verification and software validation. Every company or consultant has its own recipe (I'm being provocative). They all work, so far, as users are happy with most devices placed on the market.<br />
Seeking a common definition of the terms we use every day, like software verification and software validation, would be a good way to:</p>
<ol>
<li>Describe best practices,</li>
<li>Have people apply these practices.</li>
</ol>
<p>Such a job goes beyond what I can do in this blog! It could be a subject for an update of the IEC 62304 standard. Today the standard stops at the end of software verification; perhaps it could add definitions and requirements for software validation.</p>https://blog.cm-dm.com/post/2012/11/16/VV%3A-verification-validation%2C-where-s-the-truth#comment-formhttps://blog.cm-dm.com/feed/atom/comments/71What is software validation?urn:md5:0088832b3f7e9ca2d2697960ac6a4c272012-11-02T14:11:00+01:002012-11-13T19:44:02+01:00MitchProcessesdevelopment processIEC 62304Software ValidationSoftware Verification<p>Following <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification">the article about software verification</a>, let's see what software validation is.</p> <h3>Validation after verification</h3>
<p>If you've read the other article, it's no news to you that the end of validation happens after the end of verification.<br />
In fact, validating a device is ensuring that it conforms to defined user needs and intended uses. In the light of this definition, verification is a part of the whole validation process.<br />
Before ensuring that the device conforms to user needs, its functions have to be:</p>
<ol>
<li>described with software requirements and architecture,</li>
<li>implemented with code,</li>
<li>and tested.</li>
</ol>
<p>So the requirements and the architecture have to be validated before they're implemented. That's why there are reviews of requirements, general design and detailed design before verification. These reviews are part of the validation process.<br />
When requirements and architecture are validated, the software is implemented and tested through verification.<br />
However, the verification tests required by the IEC 62304 standard are not enough. Specific tests, i.e. clinical tests or end-user tests in real conditions, can be done to validate the device. So yes, some validation activities happen after verification.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-05_m.jpg" alt="software in medical devices - Validation includes verification activities plus additional tests" style="display:block; margin:0 auto;" title="software in medical devices - Validation includes verification activities plus additional tests, oct. 2012" />
<br /></p>
<h4>Goals of validation</h4>
<p>Unlike those of verification, the goals of validation are not purely technical. Validation shall answer these questions:</p>
<ul>
<li>Does software conform to its intended use?</li>
<li>Is clinical use effective and efficient?</li>
<li>Are risks mitigated?</li>
<li>Is the risk / benefit ratio favorable?</li>
<li>Are the requirements enforced by national regulations met?</li>
</ul>
<p>Knowing these goals, it's easy to see that purely technical tests done on a test platform are not enough.</p>
<h4>Who does validation tests?</h4>
<p>Since validation tests are not purely technical, it's the role of physicians and other people with clinical knowledge to perform them. For this reason, validation tests are done in a real environment, i.e. in healthcare centers.<br />
Software teams don't do validation tests but they support them. If there is a bug or a problem with software during validation, it's the role of software developers to find out what's wrong. And fix it!<br /></p>
<h3>Activities done for validation</h3>
<p>Let's see what activities are required after the verification to complete the validation.</p>
<h4>Clinical tests</h4>
<p>The most obvious type of activity is clinical tests. From the point of view of IEC 62304, supplying software for clinical tests is equivalent to delivering software to end-users. Thus clinical tests are the beginning of software maintenance regarding IEC 62304.<br />
Practically, the technical conditions in which software is used in clinical tests can be very close to those of verification. Especially for standalone software: an end-user with clinical knowledge tests software on a PC.<br />
The first main difference is the testing protocol, which, for clinical tests is highly formalized. The second main difference is the use of software with real patients. This is not 100% true, though, with standalone software. Standalone software can be tested with real data sets that were archived even before software was designed.</p>
<h4>No clinical tests</h4>
<p>Depending on the regulations in the countries where software will be sold, clinical tests may not be necessary. Usually, software which already has an equivalent on the market can be validated with existing clinical data.<br />
In this case there are no clinical tests. Software testing either stops at the end of verification, or finishes with a last phase of free tests in simulated clinical conditions.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-06_m.jpg" alt="software in medical devices - Tests in validation: Clinical Tests or Free tests in simulated clinical conditions" style="display:block; margin:0 auto;" title="software in medical devices - Tests in validation: Clinical Tests or Free tests in simulated clinical conditions, oct. 2012" /></p>
<h4>Software validation and Device validation</h4>
<p>When software is embedded in a device, there may be two types of validation:</p>
<ul>
<li>software validation: validation of software only,</li>
<li>device validation: validation of the device with software inside.</li>
</ul>
<p>So, there may be only one validation of the whole device, or two validations: one for software and one for the device. It depends largely on the type and complexity of software embedded in the device.<br />
If the functions ensured by software are very complex and/or critical, a separate validation of software may be deemed necessary. Separating software validation is a way to reduce the complexity of the validation of the whole device.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-07_m.jpg" alt="software in medical devices - software validation and system/device validation" style="display:block; margin:0 auto;" title="software in medical devices - software validation and system/device validation, oct. 2012" /></p>
<h3>Validation review</h3>
<p>The validation ends with a validation review with the people who participated in the design, the verification tests, and the validation tests. An "independent reviewer" (someone who didn't participate in design and validation tasks) may also be required by some national regulations. The purpose of the validation review is to ensure that all the goals enumerated above are met:</p>
<ul>
<li>software conforms to its intended use,</li>
<li>clinical use is effective and efficient,</li>
<li>risks are mitigated,</li>
<li>the risk / benefit ratio is favorable,</li>
<li>the requirements enforced by national regulations are met.</li>
</ul>
<p>For embedded software, there may be two validation reviews, if software is validated separately before the whole device.<br />
<br />
The validation review marks the end of the whole validation process. People usually use the word "validation" with both meanings: the big validation process or the validation review. The validation review is the successful end of efforts stretched over months or years. Thus some people think of validation with the meaning of validation review.<br />
<br />
<br />
<br />
When software is validated, the phase of maintenance begins. But this is another story I'll tell in another article!</p>https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation#comment-formhttps://blog.cm-dm.com/feed/atom/comments/63What is software verification?urn:md5:1e75b060409afc3e60b8cff084082ceb2012-10-26T14:09:00+02:002012-11-13T19:29:42+01:00MitchProcessesAgiledevelopment processIEC 62304Software ValidationSoftware Verification<p>Many people confuse verification and validation. Software is no exception! I'd even say the confusion is worse for standalone software.<br />
<br />
Let's see first the definition of verification and validation. I borrowed these definitions from the FDA website:</p>
<ul>
<li>Verification is confirming that design output meets the design input requirements,</li>
<li>Validation is ensuring that the device conforms to defined user needs and intended uses.</li>
</ul>
<p>OK, this remains theoretical. How do we do that with software medical devices?<br />
In this article I focus on verification and will focus on validation in the next article: <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation">What is software validation</a>.</p> <h3>Verification before validation</h3>
<p>This is very basic but I wanted to repeat it: <strong>the end of verification happens before the end of validation</strong>.<br />
If we place the two activities in the software development cycle, verification happens after coding and software integration activities. Hence there's nothing to verify or validate as long as there's no software or documentation.<br />
However, the position of verification activities depends deeply on the software development cycle.</p>
<h4>Position in the waterfall software development</h4>
<p>In the classical waterfall software development cycle, the verification activities are placed after coding activities. Thus the verification encompasses:</p>
<ul>
<li>Unit tests,</li>
<li>Integration tests,</li>
<li>Alpha 1 tests,</li>
<li>Alpha n tests (if any) ...</li>
<li>Beta 1 tests,</li>
<li>Beta n tests (if any) ...</li>
</ul>
<p>The naming of the tests phases is given here as an example. You may name your test phases as you wish.<br />
Just remember that there's no test phase in real clinical conditions in this example.<br />
<br />
The diagram below summarizes that. Verification activities are highlighted with a shaded blue background. Note that the shaded blue:</p>
<ul>
<li>covers a part of coding and integration. This is where unit and integration tests take place,</li>
<li>doesn't cover the end of the cycle (where validation happens).<br /></li>
</ul>
<p><br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-01_m.jpg" alt="Software in medical devices - Software verification Encompasses unit tests, integration tests, and alpha, beta tests" style="display:block; margin:0 auto;" title="Software in medical devices - Software verification Encompasses unit tests, integration tests, and alpha, beta tests, oct. 2012" /></p>
<h4>Position in agile software development</h4>
<p>Agile development is a continuous development cycle. Every activity of software development happens during each iteration.<br />
If an iteration implements new functions, these functions have to be verified. Even if an iteration is only made of bug fixes, these fixes have to be verified. So, verification happens during each iteration. <br />
The diagram below highlights with a shaded blue background the verification activities in an iteration.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-02_m.jpg" alt="Software in medical devices - Software verification activities in agile iteration" style="display:block; margin:0 auto;" title="Software in medical devices - Software verification activities in agile iteration, oct. 2012" />
<br />
But iterations are only the core of agile software development in a regulated environment. As I already explained <a href="https://blog.cm-dm.com/post/2012/05/12/How-to-develop-medical-device-software-with-agile-methods">in the series of articles How to develop medical device software with agile methods</a>, iterations shall be followed by a phase of consolidation.<br />
The consolidation aims at verifying that what was developed earlier is consistent, risk-free and bug-free software.<br /> So the consolidation phase also contains verification activities.<br />
Whereas iterations contain incremental verification, consolidation contains a second pass of verification of software.<br />
<br />
The diagram below summarizes that. In shaded blue, the phases where verification happens.<br />
<br />
<img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-03_m.jpg" alt="software in medical devices - Continuous software verification during Iterations and in Consolidation" style="display:block; margin:0 auto;" title="software in medical devices - Continuous software verification during Iterations and in Consolidation, oct. 2012" /></p>
<h3>IQ, OQ and PQ</h3>
<p>Software is present not only in medical devices but also in the production plants of medical or pharmaceutical products.<br />
When software is used in equipment to control the production or manufacturing of products, it has to be verified and validated.<br />
The canonical model of verification of such equipment (software or not) is made of three phases:</p>
<ol>
<li>Installation Qualification,</li>
<li>Operational Qualification,</li>
<li>Performance Qualification.</li>
</ol>
<p>Note that the word <em>qualification</em> is used instead of <em>verification</em>. I won't argue about the differences between those two words.<br />
This model is a very robust one, and we can borrow some of its ideas to verify medical device software.</p>
<h4>Installation qualification</h4>
<p>The Installation Qualification verifies that the equipment was installed according to installation drawings and specifications.<br />
For software, this means that your verification tests shall include the install procedures. One or more tests shall exist to verify the installation of software.</p>
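<p>As an illustration, an automated installation check might look like the following sketch; the file names and the version-file format are hypothetical, not from any real install procedure:</p>

```python
# Hedged sketch of an automated installation qualification (IQ) check:
# after running the install procedure, verify that the expected files
# are present and the installed version matches the release.
# EXPECTED_FILES and the VERSION file format are illustrative.

from pathlib import Path

EXPECTED_FILES = ["app.exe", "config.ini", "VERSION"]

def installation_qualified(install_dir, expected_version):
    """Return (ok, findings) for a minimal IQ check."""
    root = Path(install_dir)
    findings = [f"missing: {name}" for name in EXPECTED_FILES
                if not (root / name).exists()]
    version_file = root / "VERSION"
    if version_file.exists():
        installed = version_file.read_text().strip()
        if installed != expected_version:
            findings.append(f"version mismatch: {installed}")
    return (not findings, findings)
```

Such a check would be one test case among the IQ tests; the findings list gives the evidence to record in the test report.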
<h4>Operational Qualification</h4>
<p>The Operational Qualification includes test cases for start-up, operational use, maintenance, safety functions and alarms of the equipment.<br />
For software, this means that your verification tests shall include tests of all these functions and states. Your verification tests shall also include the inspection of the software documentation delivered to the end-user.<br />
Software verification is not limited to the main functions. All functions, including the less-used ones, shall be tested one by one. The same goes for software documentation, which shall be inspected to verify it contains the information required to use your software in safe conditions.</p>
<h4>Performance Qualification</h4>
<p>The Performance Qualification aims at verifying that the user requirements and safety requirements are fulfilled.<br />
For software this means that your verification tests shall contain scenarios to test user requirements and safety requirements. These scenarios shall be close to real use, compared to the tests of functions one by one made in the previous phase.</p>
<h4>Software verification categories</h4>
<p>To sum up what I said above, the software test cases shall contain verification of:</p>
<ul>
<li>Software installation,</li>
<li>Software documentation delivered to end user,</li>
<li>Functions of software one by one,</li>
<li>Scenarios of use of software.</li>
</ul>
<p>On top of that, we can add the verification made by:</p>
<ul>
<li>Unit or automated tests and,</li>
<li>Integration tests.</li>
</ul>
<p>Unit and integration tests should be planned before the other tests mentioned above. That's logical: while software is still being integrated, you can't test the installation procedure or the functions from A to Z.</p>
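<p>As an illustration of the first category, here is a minimal unit test; the clamping function under test is purely hypothetical:</p>

```python
# A tiny example of the unit-test verification category. The function
# under test (clamping a measured value into its allowed range) is
# purely illustrative, not from any real device.
import unittest

def saturate(value, low, high):
    """Clamp a measured value into its allowed range."""
    return max(low, min(high, value))

class SaturateTest(unittest.TestCase):
    def test_within_range(self):
        self.assertEqual(saturate(5, 0, 10), 5)

    def test_clamped_low(self):
        self.assertEqual(saturate(-3, 0, 10), 0)

    def test_clamped_high(self):
        self.assertEqual(saturate(42, 0, 10), 10)

# run with: python -m unittest <module name>
```

Tests like these are written and run by the developers themselves, early in the verification sequence, before installation and scenario testing become possible.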
<h3>Software embedded or standalone</h3>
<p>Verification activities are different if software is embedded or standalone.<br /></p>
<h4>Standalone software</h4>
<p>For standalone software, all tests described above follow their "own" planning. There are limited constraints on the planning of tests. Most of the time, the only resources that may be missing are test data, interfacing systems or - worse - the end-users who run the test scenarios.<br />
Test data, interfacing systems and end-users are definitely the critical resources of tests.</p>
<h4>Embedded software</h4>
<p>For embedded software, the constraints on tests are usually stronger. The planning of software tests depends on the availability of hardware.<br />
The verification is usually split in two phases:</p>
<ul>
<li>Tests in simulated hardware environment,</li>
<li>Tests with real hardware.</li>
</ul>
<p>Both phases may contain the test sub-phases described above: installation, documentation, functions one by one, and so on. This can make the planning of verification tests quite long and complex.<br />
As for critical resources, embedded software has the same ones as standalone software, plus the hardware the software runs on.<br /></p>
<h3>Who verifies it and where?</h3>
<h4>Who?</h4>
<p>People who test software aren't the same in every phase of verification.<br /></p>
<h5>Different people at different stages</h5>
<p>During the early stages of verification, tests are done by engineers:</p>
<ul>
<li>Unit tests by software developers,</li>
<li>Integration tests by integrators (or software testers),</li>
<li>Deep testing of functions one by one by software testers (or integrators)</li>
</ul>
<p>During the late stages of verification, tests are done by people with knowledge of the operational use of software:</p>
<ul>
<li>Test of scenarios by product champions (or software testers with operational knowledge or selected end-users).</li>
</ul>
<p><img src="https://blog.cm-dm.com/public/17-verif-valid/.verif-valid-04_m.jpg" alt="Software in medical devices - Software verification is made by software engineers, integrators, testers, end-users" style="display:block; margin:0 auto;" title="Software in medical devices - Software verification is made by software engineers, integrators, testers, end-users, oct. 2012" /></p>
<h5>Operational not clinical</h5>
<p>If you re-read the paragraph before the diagram just above, you'll see that I wrote "operational use", not "clinical use" or "medical use".<br />
This distinction is very important. Even if it is of high value to have someone with clinical knowledge who tests software during verification phases, <strong>there is absolutely neither clinical nor medical use of software during verification</strong>.<br />
<br />
It is possible to have a doctor test your software with scenarios, or even free tests, during verification.<br />
With standalone software, you may have the feeling that the conditions are the same (basically a doctor using software on a PC), whether in verification or in validation.<br />
This, I think, is where the confusion between verification and validation resides, especially for standalone software.<br />
But I'm going too far. I should leave some matter for my next article!</p>
<h4>Where?</h4>
<p>Software passes through various locations and platforms before being installed on the machine/PC of the end-user.<br />
The platform on which software is installed depends on the test phase and on whether software is standalone or embedded.<br />
For standalone software:</p>
<ul>
<li>Unit tests on software development platform,</li>
<li>Software Integration tests on software development platform or software integration platform,</li>
<li>Deep testing of functions one by one on software integration platform or on test platform with real interfaces,</li>
<li>Test of scenarios on test platform with real interfaces.</li>
</ul>
<p>For embedded software:</p>
<ul>
<li>Unit tests on software development platform,</li>
<li>Software Integration tests on software development platform or software integration platform,</li>
<li>Hardware + Software Integration tests on hardware integration platform,</li>
<li>Deep testing of functions one by one on hardware/software integration platform or on test platform with real equipment,</li>
<li>Test of scenarios on test platform with real equipment.</li>
</ul>
<p>Again, even if tests are carried out on the real equipment/PC by an end-user with medical knowledge, they remain verification tests.<br />
None of this is a secret; there's a lot about testing in the literature. I just wanted to stress what verification is.<br />
<br />
<br />
As we've seen here, verification tests are exclusively technical. I never talked about intended use or clinical assessment and how they intersect with verification tests. That is the purpose of <a href="https://blog.cm-dm.com/post/2012/08/30/What-is-software-validation">my next article about software validation</a>.</p>https://blog.cm-dm.com/post/2012/08/30/What-is-software-verification#comment-formhttps://blog.cm-dm.com/feed/atom/comments/62The concept of continuous certification: get rid of the Big Freezeurn:md5:23e52c9f6915eac9c0dcccd62babf68d2012-08-31T16:47:00+02:002012-09-03T19:44:52+02:00MitchProcesses<p>Safety-critical software always faces the big freeze before certification.<br />
This happens because the waterfall model is the preferred development cycle for safety-critical software: you can't change anything once you're in the qualification phase for certification.<br />
To be more flexible, some smart people created the concept of continuous certification. Its purpose is to apply the principles of agile methods to safety-critical software development.</p> <p>Using agile methods breaks the big freeze: everything can change at each iteration. Agile methods remain limited by the constraints of safety-critical software development. As I wrote in <a href="https://blog.cm-dm.com/post/2012/07/02/Class-C-software-and-agile-methods">Class C software and agile methods</a>, user requirements shouldn't change during iterations. But small changes in functions, in ergonomics and in architecture should be allowed.<br /></p>
<h4>What happens if I push this button?</h4>
<p>To give you an image, changing something in safety-critical software may be like letting Leslie Nielsen push a button in the <a href="http://en.wikipedia.org/wiki/Airplane!">Airplane!</a> cockpit.<br />
If you want to be 100% sure of what you do when you change something:</p>
<ul>
<li>Either you're a guru who knows everything about his software and can assess the consequences of the changes,</li>
<li>Or you have a tool that gives you all the artifacts touched by the change.</li>
</ul>
<p>The second option is the principle of continuous certification. Each time you change something, you know exactly what is going to be touched and what has to be retested/requalified.<br /></p>
<h4>Tools for continuous certification</h4>
<p>Software development documentation makes extensive use of traceability. Most of my templates, like the <a href="https://blog.cm-dm.com/post/2012/01/30/New-template%3A-Software-Tests-Plan">Software Tests Report</a>, contain traceability tables.<br />
The tools that manage continuous certification extend the principle of traceability to (almost) every bit of software documentation. This kind of tool has:</p>
<ul>
<li>databases to store the software structure and the traceability between artifacts,</li>
<li>rules to verify that structure is consistent,</li>
<li>algorithms to walk through the structure and find structural "bugs",</li>
<li>a formal language to let humans fill the database.</li>
</ul>
<p>The data structure is pretty much like a tree, with user requirements in the trunk and test cases in the leaves.
For example, when something has to be changed:</p>
<ul>
<li>the change is coded in formal language,</li>
<li>algorithms walk through the tree to find what is touched by the change,</li>
<li>other algorithms may raise issues if the change makes the structure inconsistent.</li>
</ul>
<p>If the results of the algorithms are OK, then the change is implemented in the code. Some more classical tools, like static analysis, can be run to ensure that the code is bug-free.<br />
To make a comparison, this is like unit tests, but far more complex:</p>
<ul>
<li>when you change something in the code, unit tests show you immediately whether it works,</li>
<li>when you change something in the requirements, the continuous certification tool shows you immediately what is touched.</li>
</ul>
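<p>The impact analysis such a tool performs can be sketched as a walk through the traceability tree; the tree content and artifact names below are purely illustrative, not taken from any real tool:</p>

```python
# Sketch of continuous-certification impact analysis: the traceability
# structure is a tree with user requirements at the trunk and test cases
# at the leaves. Changing one artifact "touches" its whole subtree,
# which tells you what must be retested or requalified.
# The tree content is illustrative, not from any real tool.

tree = {
    "UR-1": ["SRS-1", "SRS-2"],     # user requirement -> software reqs
    "SRS-1": ["UNIT-1", "TEST-1"],  # software req -> unit and test case
    "SRS-2": ["UNIT-2", "TEST-2"],
    "UNIT-1": [], "UNIT-2": [],
    "TEST-1": [], "TEST-2": [],
}

def impacted(artifact):
    """Return every artifact in the subtree of a changed artifact."""
    touched, stack = set(), [artifact]
    while stack:
        node = stack.pop()
        if node not in touched:
            touched.add(node)
            stack.extend(tree.get(node, []))
    return touched

# Changing SRS-1 touches its unit and its test case, but not SRS-2
print(sorted(impacted("SRS-1")))
```

A real tool adds consistency rules on top of this walk (e.g. flagging a requirement left without a test after the change), but the tree traversal is the core of the "what must be requalified?" answer.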
<h4>Do these tools exist?</h4>
<p>An open-source project has the ambitious goal of delivering such a tool: the <a href="http://www.open-do.org/">Open-DO project</a>.<br />
The <a href="https://forge.open-do.org/plugins/moinmoin/qmachine/FrontPage">Q-Machine</a> is the tool that does what I explained above. It is not yet fully integrated with well-known IDEs like Eclipse or the GNU tools (it should be soon, I think). But this is a big change in software development. Open-DO is still under construction, so you don't have all the necessary tools for static analysis, code coverage and the like. But the team is working hard on it and <a href="http://www.open-do.org/2011/06/07/safety-and-security-concerns-in-medical-device-software/">knows its benefits for the software medical device industry</a>.</p>
<h4>Conclusion</h4>
<p>I wrote once that combining safety-critical software with agile methods is like herding cats. I hope tool sets like Open-DO will make me definitely change my mind.<br />
<br />
<br />
A big thanks to Loïc of <a href="http://www.realtimeatwork.com">RealTime-at-Work</a> who gave me the link to Open-DO.</p>https://blog.cm-dm.com/post/2012/08/31/The-concept-of-continuous-certification%3A-get-rid-of-the-Big-Freeze#comment-formhttps://blog.cm-dm.com/feed/atom/comments/3Class C software and agile methodsurn:md5:f152b472a3a6b4af0bb3e8b3a1459bd62012-07-06T13:17:00+02:002012-07-07T10:08:35+02:00MitchProcessesAgilecritical softwaredevelopment processIEC 62304ISO 14971risk management<p>Are agile methods compatible with the development constraints that the IEC 62304 standard sets for class C software?<br />
After a <a href="https://blog.cm-dm.com/post/2012/06/08/How-to-combine-risk-management-process-with-agile-software-development">series of three posts about agile methods and risk analysis</a>, I focus in this post on IEC 62304 class C critical software.</p> <h4>Agile paradigm and critical software</h4>
<p>If I had to sum up the constraints of critical software in one sentence, it would be:<br />
<strong>The more critical, the fewer user requirements.</strong><br /></p>
<h5>Critical software doesn't like complexity</h5>
<p>Complexity is a direct source of bugs, so critical software doesn't like complexity.<br />
Multiple user requirements are a source of complexity: the more user requirements, the more workflows, components, branches, conditions and the like. So the best way to have critical software run without bugs is to limit the number of user requirements.<br />
Limiting the number of user requirements doesn't mean that we limit the number of software requirements deduced from them. In other words: few user requirements, and a lot of software requirements that cut the goals of the user requirements into small pieces.</p>
<h5>Users doing critical tasks don't like complexity</h5>
<p>If you have to do a critical task, say something very critical like Cardiopulmonary Resuscitation, you don't want a complex scenario. Complexity adds stress and you don't want it. You prefer it to be very simple. If the scenario is not simple, you are trained to act exactly as the scenario defines. This is the case of doctors, who spend 10 years of their lives learning the subtleties of medical protocols by heart.<br />
So, when defining the requirements of critical software, users will be prone to:</p>
<ul>
<li>Define the software functions from A to Z, thinking of all possibilities according to the bibliography and their experience,</li>
<li>Define a simple software.</li>
</ul>
<p>If users aren't prone to defining something simple, you should guide them to do so.<br />
This is the opposite of the agile situation, where the software definition evolves a lot during design. It is, however, the ideal situation for the waterfall model.</p>
<h5>Would users add new user requirements to critical software?</h5>
<p>You could object that it is difficult to think about everything in software before implementing it. If you present software in the middle of its development, no doubt beta-test users will suggest or impose new user requirements.<br />
But there is a good chance that the new requirements won't require reconsidering the architecture or refactoring the whole software. We are more in the situation of a waterfall model with multiple runs, like the ones I showed in <a href="https://blog.cm-dm.com/post/2012/05/12/How-to-develop-medical-device-software-with-agile-methods">the first two diagrams of this post</a>.<br />
So, agile methods aren't really adapted to critical software because the agile paradigm is not respected.<br />
In other words: <strong>no changes in user requirements = no agile method.</strong></p>
<h4>Separating very critical parts</h4>
<p>One way to mitigate risks is to separate the most critical parts in the architecture of your software (as asserted in the IEC/TR 80002-1 guideline). For example, the critical part runs on a microcontroller and the rest runs on an embedded PC on Linux. The former is class C, the latter is class B.<br /></p>
<h5>Organizing the development of critical parts</h5>
<p>Separating critical parts requires solid experience in software architecture. I can't give advice about architecture; it's too dependent on technology. But that doesn't prevent me from giving advice about the kind of organization that manages this kind of software development!<br />
The steps are the following:</p>
<ul>
<li>identify the class C critical part,</li>
<li>develop it in a separate sub-project, with a less agile method or no agile method at all, like waterfall,</li>
<li>develop the less critical (or non-critical) part with agile methods.</li>
</ul>
<p>Developing the less critical part with agile methods doesn't mean that anything goes! You still have to respect the constraints of medical software development within the agile paradigm!</p>
<h5>A way to develop the critical software</h5>
<p>The pitfall of this organization is the interface between the critical and non-critical parts. There is a good chance that the interface will introduce bugs in the critical part if the interface of the less/non-critical part evolves a lot. To avoid this, you have to fix the interfaces early:</p>
<ul>
<li>Separate interface development from the rest of the project,</li>
<li>Implement interfaces of both parts,</li>
<li>And implement simulators of components that use the interfaces to verify
they work.</li>
</ul>
<p>The condition for doing so is to have a validated design of the critical part and its interfaces. This is not a 100% effective solution, but it can avoid a lot of problems with the interfaces. The drawback is that the interfaces have few or zero possibilities to evolve when the non-critical part evolves.</p>
<h5>Software development team</h5>
<p>If you have only one unified team (this is a prerequisite of agile methods) and the same technologies for the critical and non-critical parts, begin with the development of the critical part with waterfall, and continue with the development of the non-critical part with an agile method.<br />
But I think this is rarely the case. There is a good chance that you have two different teams, one for the critical part and one for the rest, especially if the technologies (framework, language, OS) are not the same in both parts.<br />
This means that both teams have to communicate and work together. Team communication is also a source of bugs, and it is the role of project managers to avoid mix-ups and misunderstandings (something to handle in the project management plan).</p>
<h4>Conclusion</h4>
<p>The underlying reflection in this post is that there is no miracle solution for the development of critical software.<br />
Very critical software is made with waterfall or an equivalent method. Agile software development and critical software development are somewhat antonymic. I tried to give my own solutions, but their principle is to separate the critical part from the rest... and to develop the critical part with a regular method.<br />
<br />
Applying agile methods to constrained and critical software is like <a href="http://youtu.be/Pk7yqlTMvp8">herding cats</a>!</p>https://blog.cm-dm.com/post/2012/07/02/Class-C-software-and-agile-methods#comment-formhttps://blog.cm-dm.com/feed/atom/comments/14