Classic

Ask someone how to verify software and he/she will tell you to put a person in front of the guilty machine with a dozen testing procedures.
This is actually the most basic way to test software. Doing this, you are sure that:

  • not all cases will be covered (even with hundreds of testing procedures),
  • the time initially reserved for tests in the planning won't be enough,
  • the final software will be full of bugs.

Yet most critical bugs should have been wiped out, though some bugs may still be hidden somewhere. That's why it's necessary to use other methods to test software.

The second most basic way to test software is to give it to a few selected end-users, after the first phase of tests described above.
Doing this, you are sure that:

  • not all cases will be covered (but fewer than before),
  • the time reserved for tests by the end-user won't be long (he/she is probably a physician and has lots of other things to do),
  • the final software will be full of bugs (but fewer than before).

The second bullet point is false if you pay selected end-users to do the tests, or if you're lucky enough to have a passionate end-user.
Nevertheless, there will still be remaining bugs in the software, most probably:

  • all critical bugs are wiped out,
  • some major bugs could still reside somewhere,
  • there are dozens of minor bugs.

But it works so far. And you don't have enough time to fix anything else, so the device is placed on the market as is.
This is true for devices with a low level of risk, namely software safety class A according to the IEC 62304 standard, or possibly low classes according to national regulations.
Testing all possible cases is not possible within the timeframe and budget of most software development projects. There are always remaining bugs that are found when the device is already placed on the market. This is not a big deal as long as the remaining bugs don't impair the risk-benefit assessment of the device.

Jazz

Most bugs are triggered by small consistency errors in code. For example, not testing whether an input parameter is inside a given range of values before starting computations. Unit testing is a method to ensure that these tiny inconsistencies are detected and fixed.
When they became popular a few years ago, unit tests seemed to be a miraculous method to kill bugs in the egg (I personally was an aficionado of unit tests!). But this method is not so miraculous and has its own pitfalls.
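To make this concrete, here is a minimal sketch of what such a unit test looks like. The function and its values are hypothetical (they are not taken from any real device software); the point is the pattern: one test for nominal values, one test checking that an out-of-range input is rejected before any computation runs.

```python
import unittest


def heart_rate_zone(bpm):
    """Classify a heart rate reading (hypothetical example).

    Rejects implausible inputs so that later computations
    never run on garbage data.
    """
    if not 20 <= bpm <= 250:
        raise ValueError(f"implausible heart rate: {bpm}")
    if bpm < 60:
        return "low"
    if bpm <= 100:
        return "normal"
    return "high"


class TestHeartRateZone(unittest.TestCase):
    def test_nominal_values(self):
        self.assertEqual(heart_rate_zone(45), "low")
        self.assertEqual(heart_rate_zone(72), "normal")
        self.assertEqual(heart_rate_zone(140), "high")

    def test_out_of_range_input_is_rejected(self):
        # The kind of tiny inconsistency unit tests are good at
        # catching: a nonsensical value must not reach the computation.
        with self.assertRaises(ValueError):
            heart_rate_zone(-1)
        with self.assertRaises(ValueError):
            heart_rate_zone(1000)
```

Run with `python -m unittest` on the file containing this code. Note how the range check and its test mirror each other: the test documents the rule the code is supposed to enforce.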
By doing this, you are sure that:

  • not all cases of inconsistency are covered (but fewer than without unit tests),
  • the time reserved to code unit tests doesn't fit into the planning,
  • some critical bugs and major bugs are wiped out.

The problem with unit tests is that the software developer is at the center of the process:

  • he/she has to decide which tests to write,
  • he/she could write an erroneous unit test (do we need a unit test to test the unit test?),
  • he/she may, in the end, not have the time to write them at all.

So, unit tests bring a higher level of confidence that the final software has less hidden bugs.
Compared to classic methods, they wipe out small inconsistencies that can lead to critical bugs through a chain of events in the algorithms running in the software.
They're really complementary to the classic methods. Classic methods are better at catching bugs in use cases, or bad behaviors compared to high-level requirements. Unit tests are better at catching bugs in workflows or in algorithms at a lower level.
It is definitely a good idea to implement unit tests for devices with a medium level of risk, namely software safety class B according to the IEC 62304 standard!

Note

Some languages have this kind of check built into the language specification, like the pioneering Eiffel language. Eiffel supports design by contract. Basically, it requires that each input parameter and each output parameter is tested against a rule implemented in the code of the procedure/method. But Eiffel has always remained confidential. And design by contract has only really taken off with later versions of mainstream languages, like Code Contracts in .NET for C# and the contract aspects of Ada 2012.
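The design-by-contract style can be emulated in most languages with plain assertions. Here is a minimal sketch in Python; the function and its rules are hypothetical, and real contract systems (Eiffel, Ada 2012) check these conditions declaratively rather than with inline statements.

```python
def infusion_rate(volume_ml, duration_min):
    """Compute an infusion rate in mL/min (hypothetical example).

    Contracts in the Eiffel/Ada 2012 style, emulated with assertions:
    preconditions check the inputs, the postcondition checks the result.
    """
    # Preconditions: inputs must be physically meaningful.
    assert volume_ml > 0, "precondition failed: volume must be positive"
    assert duration_min > 0, "precondition failed: duration must be positive"

    rate = volume_ml / duration_min

    # Postcondition: the result must be a positive rate.
    assert rate > 0, "postcondition failed: rate must be positive"
    return rate
```

A call like `infusion_rate(100, 50)` passes both contracts, while `infusion_rate(0, 10)` fails fast at the precondition instead of silently producing a nonsensical rate downstream.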


Going higher in complexity, next time we'll look at static analysis. Another method, which deserves its own article!