The guilty code

Here is the flawed code from Apple's SSL/TLS library:

```c
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus err;
    ...
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...
fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}
```

The snippet is quoted from Adam Langley's ImperialViolet blog, which in turn quotes Apple's published source code.

How can we reduce the probability of such a flaw?

Even if it's tempting to blame the developer for this code, that's not the right way to prevent such a situation from happening again. It's the development process as a whole that is called into question here, at every step.
Assuming that collecting user requirements, writing specifications and designing the architecture are not at fault (this is SSL: look at the RFC and so on, OK?), the situation can be avoided by putting safeguards in place during coding and testing.

Testing

Tests are a good way to find bugs, though not all of them.
There are plenty of ways to test software (see "the big picture" in this article about tests): unit tests, user tests and so on.

According to Adam's article, the problem could only be discovered by running very specific tests with a custom-made TLS stack.
Needless to say, such a tool would itself be subject to errors. How do you test a complex test tool? It would need IQ, OQ and PQ to be qualified as the right testing tool. Or, for ISO 13485 aficionados, a tool validation protocol according to section 7.5.2.1 of that standard.

Thus it appears that tests alone are probably not enough to catch this kind of bug.
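To make the failure mode concrete, here is a reduced model of the control flow (my own sketch with made-up helper names, not Apple's code): the duplicated goto jumps to the exit while err still holds 0, so the final check is skipped and verification reports success.

```c
#include <assert.h>

/* Stand-ins for the hash and signature steps; 0 means success. */
static int hash_update(void) { return 0; }
static int signature_matches(int good) { return good ? 0 : -1; }

/* Same control-flow bug as the original: the duplicated goto jumps
 * to fail while err is still 0, skipping the final check. */
int verify(int good_signature) {
    int err;
    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;                                        /* duplicated line */
    if ((err = signature_matches(good_signature)) != 0)   /* never reached */
        goto fail;
fail:
    return err;
}
```

A happy-path test (verify(1) == 0) passes, so a normal suite stays green; only a negative test that presents a forged signature and expects rejection, which in practice means a custom TLS stack, reveals that verify(0) also returns 0.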

Coding

We only have the coding phase left!
There are two possibilities to trap this kind of bug:

  • Verifying what developers code,
  • Changing the way developers code.

Verifying code

Here we have two possibilities:

  • Human verification,
  • Automated verification.

Human verification is achieved through peer code reviews, whereas automated verification relies on static analysis tools or advanced compiler checks.
Both have their advantages and drawbacks (dealing with humans, or dealing with machines designed by humans).
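As a concrete example of an automated check: the duplicated goto leaves dead code behind it, which is exactly what an unreachable-code diagnostic reports. In a reduced version of the pattern (made-up helper, not Apple's code), compiling with clang's -Wunreachable-code flag warns about the line after the unconditional goto:

```c
#include <assert.h>

/* Compiling with `clang -Wunreachable-code -c` warns that the line
 * after the second, unconditional goto can never execute. */
static int check(void) { return 0; }

int verify_step(void) {
    int err;
    if ((err = check()) != 0)
        goto fail;
        goto fail;      /* unconditional: everything below is dead */
    err = 1;            /* unreachable, and the compiler can say so */
fail:
    return err;
}
```

The code still compiles and runs (verify_step() returns 0), which is why such warnings should be promoted to errors on the build server.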

BTW: these kinds of verification are in line with the IEC 62304 requirements about software unit verification found in sections 5.5.2, 5.5.3 and 5.5.4 of the standard.
So if you want to be in line with IEC 62304 for class B software (and class C for 5.5.4), I can only urge you to implement unit tests and/or plan code reviews.

If you're skeptical about the benefits of code reviews, I invite you to read this previous article about code reviews vs tests.

Changing the way developers code

Here we have two levers:

  • Pair programming,
  • Coding standards.

I can only urge you to adopt coding standards and to try pair programming with your development teams!
IMHO this is the most efficient way to avoid such bugs.

Coding standards, however, are more efficient when a static analysis tool verifies that they are applied: they usually form a document with dozens of rules, which are difficult to know by heart!
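As an example of such a rule (my illustration; the helper function is made up): mandatory braces around every if body. Braces do not change the behaviour, but they stop the duplicated line from hiding behind indentation, and a static analysis tool can enforce the rule automatically (clang-tidy, for instance, has a braces-around-statements check):

```c
#include <assert.h>

static int update_ok(void) { return 0; }

/* Rule: every if body gets braces. Behaviour is unchanged, but the
 * duplicated goto can no longer hide behind indentation. */
int with_braces(void) {
    int err;
    if ((err = update_ok()) != 0) {
        goto fail;
    }
    goto fail;          /* now visibly unconditional */
    err = -1;           /* and visibly dead code */
fail:
    return err;
}
```

Note that the bug is still there (with_braces() still returns 0): the rule's job is to make the anomaly impossible to miss in a review, not to fix it, which is why the standard must be paired with dead-code detection.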

Pair programming is not easy to implement, especially when managers see it as doubling costs! That's the biggest obstacle to this method!

For the hell of a goto

Another remark: ban goto outright in your coding practices.
If I were the one configuring a C compiler on a build server, a goto wouldn't compile.
Just for the hell of it! :-)
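If goto is banned, the usual C cleanup idiom has to be replaced by something else. One goto-free alternative (a sketch with made-up step functions) is the do { ... } while (0) block with break, which keeps a single exit and cleanup point:

```c
#include <assert.h>

static int step1(void) { return 0; }
static int step2(void) { return -1; }   /* simulate a failing check */

/* A goto-free variant: break out of a do/while(0) block instead of
 * jumping to a label; cleanup happens at the single exit point. */
int verify_no_goto(void) {
    int err;
    do {
        if ((err = step1()) != 0) break;
        if ((err = step2()) != 0) break;
        /* further steps... */
    } while (0);
    /* single cleanup point: free buffers here, no label needed */
    return err;
}
```

Here verify_no_goto() returns -1 because step2 fails; a duplicated break would still be a bug, but it could no longer jump past the cleanup code.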

Conclusion

The best way to put the odds in our favour is to combine several methods:

  • Peer code reviews and/or pair programming,
  • Coding conventions,
  • Static code analysis,
  • Classical tests.

You will be totally in line with sections 5.5, 5.6 and 5.7 of IEC 62304.

The stricter the development process, the fewer the bugs. Hence the software safety classes.