Software in Medical Devices, a blog by MD101 Consulting


En route to Software Verification: one goal, many methods - part 2

In my last article, I talked about the most classical methods used to verify software: human testing (driven by test cases or not) and unit tests. I was about to move on to static analysis, which I place at a higher level of complexity in the list of verification methods, but I first have to say a bit more about unit tests.

Discussion about unit tests vs verification

The main characteristic of unit tests is that they change the way developers work:

  • Good unit tests are written before coding,
  • Less good unit tests are written after coding.

I can't write that unit tests written after coding are bad. It's always good to write unit tests. They are just less good if written a posteriori or ... during a phase of reverse-engineering (yes, it happens. Don't blame teams who work like that. You never know...).
However, there is one case where unit tests should be written a posteriori: when a bug is found and fixed. The unit test is written along with the bug fix to maximize the odds of avoiding regressions in future versions.

Unit tests = coding

A good implementation of unit tests requires changing the way developers design and code. First you think about the function, then you write the test, and finally you write the code.
So it is not so obvious to do unit tests the canonical way. Developers need training to be efficient at writing unit tests and, most of all, should be willing to do so.
A whole agile development method was even created around this idea: Test-Driven Development (TDD). It makes intensive use of very early unit tests combined with agile development in short loops.

Unit tests = verifying

Unit tests are a tool to verify that software runs the way it was designed. So they are definitely a part of methods used during verification.
But they happen very early in the development process, before the "true" verification phase. If they are not properly implemented during the coding phases, it's extremely difficult, time-consuming and expensive to write them during the verification phase.
Classical software test cases may be modified or completed during the verification phase if they were not prepared well enough. But this is hardly possible with unit tests, because during verification developers spend all their time fixing bugs.
They don't have time to add more unit tests on components where bugs don't show up. It's too late.

Unit tests rock

Unit tests are a very powerful tool. But writing unit tests and coding are intertwined activities. As a consequence, they require some changes in the habits of the software development team.
That's why unit tests are not jazzy but definitely rock.


Let's continue with static analysis, another very powerful tool. Contrary to unit tests, there's little to do from the point of view of the developer to run static analysis. Most tools are run at build time and the main effort is to interpret the report generated by the tool.

Checking the code

Static analysis can be seen as code checks and unit tests that other developers have thought about and written for you. The main advantage of static analysis is its ability to scan all the code to find issues and report them.
It's the best way to weed out the most basic bugs found in C code, like uninitialized memory reads, null pointer dereferences and so on. Open-source static analysis engines can do these types of basic checks:

  • Basic errors like those mentioned above,
  • Programming rules,
  • and also code quality metrics (dead code, length of methods ...).

I say that these checks are basic, but the engines that perform them are absolutely not basic code! Since they are all based on syntactic and semantic analysis, writing one is as complex as writing a compiler or an interpreter. That's why some people consider that these checks should be built into the compilers.
Some compilers actually do this, because the language specs already contain rules that make these checks mandatory, as in Ada or C#, or because they implement coding guidelines like MISRA C.
So the frontier between an advanced compiler and a static analysis engine is sometimes blurred.

Advanced code inspection

The most advanced static analysis engines can go far beyond these basic checks. For example finding conditions when:

  • a division by zero or arithmetic overflow occurs,
  • a loop doesn't exit,
  • a database deadlock happens.

There are as many possibilities as there are programs.
With such tools, the advantage is that they cover cases you haven't thought of. The drawback is that some lazy people might rely on the tool alone to find bugs.
Another drawback of such tools is that you can't rely on just one of them. Each tool has its own checks that the others don't. A lot of checks overlap, but there are always grey areas.
It's definitely true that using one is going to decrease the number of hidden bugs in the code. In theory, it would be better to use more than one, but that is rarely a practically feasible solution.

Which tools

A lot of free and open-source tools exist, with varying degrees of efficiency. They all do the basic checks and some more or less advanced checks. Here is a partial list of tools:

All commercial tools available on the market do both types of checks: basic ones and (more or less) advanced ones. I usually don't quote commercial software in my blog, but I can make an exception for Polyspace. It was created after the crash of the European rocket Ariane 5. The bug in the on-board computer responsible for the crash was known as "impossible to find with automated tests". Now it can be found, thanks to Polyspace!

Static analysis and regulatory requirements

It is definitely a good idea to implement static analysis for devices with a medium to high level of risk, namely class C software according to the IEC 62304 standard, and optionally for class B software. This is my opinion; there's no requirement in the standard that tells you to do static analysis.
There is no regulatory requirement that makes static analysis mandatory for mission-critical software either. But it's better to have it in your software development process. As far as I know, there is no FDA guidance that quotes static analysis (except a small line in the guidance about software in a 510(k)).
Researchers at the FDA published an article a few years ago about the benefits of static analysis. This shows the FDA's interest in static analysis for the specific cases quoted in that article.
On the CE marking side, I found no data about static analysis. So use this method based on your own assessment of your situation!

I could say a lot more about these methods, like the problems of false positives and false negatives, or how to interpret static analysis logs. There are dozens of fantastic articles on the web. Static analysis is a lively and exciting subject (when I say exciting, it's true for you if you have kept alive the little geek inside you!).

There are even more complex software verification methods. I'll talk about those in the last article of this series en route to software verification!
