Software in Medical Devices, by MD101 Consulting

V&V: verification & validation, doing it right.

After writing about V&V in two previous posts, I received a lot of comments from people on a well-known social network. They corrected my view of V&V and offered their own definitions.
Here is a selection of their comments.

Doing the right product and the product right

Someone gave these two definitions:

  • Validation is doing the right product.
  • Verification is doing the product right.

I like these two definitions: they are concise and make good mnemonics.
If there is one thing you should remember from this post, it's these two definitions!

Validation is validating requirements

Yes, doing the right product means validating the requirements.
But not only the requirements: it's also necessary to validate that the final product matches the initial concept described in those requirements.

Verification and validation aren't sequential

Yes, verification and validation aren't sequential. Validation begins before verification, even before coding. The first step of validation is validating the requirements, to ensure that the product is well defined.
But verification ends before validation. I can't validate software that hasn't first been verified from A to Z. How could I validate software whose functions haven't been tested?
To reconcile everyone, I should have written in my last two posts that verification ends before validation ends. I have edited those two posts accordingly.

Validation is broader than verification

Yes, that's true: validating a device goes beyond the scope of software (except for standalone software devices). That's why some people talk about software validation and device validation as two separate concepts.
But for software taken alone, the scope of software verification and software validation is:

  • Software, and
  • Its documentation.

Here's my rationale.
Every piece of input data:

  • Intended use,
  • Risk assessment,
  • Regulatory requirements,
  • Usability requirements, and
  • Last but not least, user requirements, and so on...

Can be translated into more detailed requirements:

  • Use case scenarios,
  • Functional requirements and non-functional requirements, and
  • Documentation/labelling requirements.

These more detailed requirements can be translated into:

  • Architecture,
  • Interfaces,
  • More detailed software requirements, and
  • Software units.

Which are translated into:

  • Software code,
  • Configuration data,
  • User documentation and administrator/maintenance documentation.

All of these artifacts are tightly bound together by traceability.
So, when I verify the software and its documentation, my software verification has the same scope as my software validation. I ensure this through traceability from the top-level requirements down to the most refined requirements, software units, software code and their tests.
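To illustrate the idea, here is a minimal sketch, in Python, of what an automated check of those traceability links could look like. It is only an illustration under my own assumptions: the Artifact class, the trace_to() helper and the identifiers (UR-01, SRS-07, TEST-042) are hypothetical, standing in for whatever a real requirements-management tool records.

    # Illustrative only: the Artifact class, trace_to() and the identifiers
    # (UR-01, SRS-07, ...) are hypothetical, not part of any standard or tool.

    class Artifact:
        def __init__(self, ident, kind):
            self.ident = ident      # e.g. "UR-01", "SRS-07", "UNIT-ParseHL7"
            self.kind = kind        # "user req", "software req", "unit", "test"
            self.children = []      # artifacts that refine, implement or test this one

        def trace_to(self, child):
            """Record a downstream traceability link and return the child."""
            self.children.append(child)
            return child

    def untraced(root):
        """Return every non-test artifact that has no downstream trace."""
        gaps, stack = [], [root]
        while stack:
            node = stack.pop()
            if not node.children and node.kind != "test":
                gaps.append(node)   # a requirement or unit nothing refines or tests
            stack.extend(node.children)
        return gaps

    # Example chain: user requirement -> software requirement -> unit -> test.
    ur = Artifact("UR-01", "user req")
    srs = ur.trace_to(Artifact("SRS-07", "software req"))
    unit = srs.trace_to(Artifact("UNIT-ParseHL7", "unit"))
    unit.trace_to(Artifact("TEST-042", "test"))
    ur.trace_to(Artifact("SRS-08", "software req"))  # refined but never implemented

    for gap in untraced(ur):
        print(f"{gap.ident} ({gap.kind}) has no downstream trace")

In practice a requirements-management tool maintains these links, but the principle is the same: any artifact left without a downstream trace is a hole in the common scope of verification and validation.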

Why dissociate device V&V and software V&V?

In all of this discussion, I made the assumption that device V&V and software V&V can be differentiated. One could argue that this distinction isn't relevant: in the end, it's the device that everybody wants to validate.
I think that device V&V and software V&V should be differentiated for technical reasons when software is prominent in a device, e.g. when as much as 50% of the requirements are addressed by software:

  1. Though everybody wants to minimize it, software is a source of complexity; yet users tend to think an engineer can add new functions with a few mouse clicks,
  2. The software development process has its own pace. Prototypes can be ready very quickly (also true of hardware with rapid prototyping), but it takes a lot of time and rework to get a usable product,
  3. Thanks to simulators, the final hardware doesn't need to be ready to verify and validate the software,
  4. Validating software includes validating its graphical user interface, which can be a long process with end users if the GUI is complex,
  5. Testing software takes a long time and it's difficult to anticipate all software failures.

I could find tons of arguments to show that validating software can be separated from validating a device.
The last argument I could use: regulations require it. The CE marking directive demands software validation (Annex I, 12.1.a of the current directive, with even more in the future directive expected in 2014). In the US, 21 CFR 820.30(g) requires the same, and the guidance on General Principles of Software Validation that the FDA released 15 years ago is still in force.

So, where is the truth?

I haven't seen accurate definitions of software verification and software validation. Every company or consultant has its own recipe (I'm being provocative). They all work, so far, since users are happy with most of the devices placed on the market.
Seeking a common definition of the terms we use every day, like software verification and software validation, would be a good way to:

  1. Describe best practices,
  2. Have people apply these practices.

Such a job goes beyond what I can do in this blog! It could be the subject of an update of the IEC 62304 standard. Today the standard stops at the end of software verification; perhaps it could add a definition of, and requirements for, software validation.



Comments

1. On Friday, 16 November 2012, 15:25 by Loïc Fejoz

About the definition of verification and validation, and more generally about definitions in the field of design and system building, it is interesting to have a look at the SEBoK (Systems Engineering Body of Knowledge) glossary. It aggregates several definitions in one place.

See
http://www.sebokwiki.org/1.0/index....
http://www.sebokwiki.org/1.0/index....

Note that you would probably find some interesting material in the SWEBoK (Software Engineering Body of Knowledge) as well.

Anyway, your article is a good reminder for everyone...
