
Discussion on: Safety-Critical Software: 15 things every developer should know

Dian Fay

Trevor Kletz devotes chapter 20 of What Went Wrong? Case Histories of Process Plant Disasters to "problems with computer control". It has some good general points:

Computer hardware is similar to other hardware. Once initial faults have been removed and before wear becomes significant, failure can be considered random and treated probabilistically. In contrast, failure of software is systemic. Once a fault is present, it will always produce the same result when the right conditions arise, wherever and whenever that piece of software is used.

(discussing a computer-enabled methanol spill) A thorough hazop [hazard and operability study] would have revealed that this error could have occurred. The control system could have been modified, or better still, separate lines could have been installed for the various different movements, thus greatly reducing the opportunities for error. The incident shows how easily errors in complex systems can be overlooked if the system is not thoroughly analyzed. In addition, it illustrates the paradox that we are very willing to spend money on complexity but are less willing to spend it on simplicity. Yet the simpler solution, independent lines (actually installed after the spillage), makes errors much less likely and may not be more expensive if lifetime costs are considered. Control systems need regular testing and maintenance, which roughly doubles their lifetime cost (even after discounting), while extra pipelines involve little extra operating cost.

Computers do not introduce new errors, but they can provide new opportunities for making old errors; they allow us to make more errors faster than ever before. Incidents will occur on any plant if we do not check readings from time to time or if instructions do not allow for foreseeable failures of equipment.
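
To make that first point concrete, here's a toy C sketch of the distinction (the MTBF figure, mission time, and divide-by-zero defect are my own illustrative assumptions, not examples from the book): a hardware channel's random failures can be handled probabilistically, while a software defect fires on every single run that meets its triggering condition.

```c
/* Toy illustration of random vs. systemic failure. Build with: cc reliability.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical hardware channel: constant failure rate, MTBF of
     * 100,000 hours. Probability of at least one random failure over
     * a mission time t is 1 - exp(-t / MTBF). */
    double mtbf_hours = 100000.0;
    double mission_hours = 10000.0;
    double p_hw_failure = 1.0 - exp(-mission_hours / mtbf_hours);
    printf("P(hardware failure over %.0f h) = %.3f\n",
           mission_hours, p_hw_failure);

    /* Hypothetical software defect: misbehaves whenever a sensor reads
     * exactly zero. Nothing random about it -- it triggers on every run
     * that hits the condition, wherever the code is deployed. */
    int sensor_reading = 0;
    if (sensor_reading == 0)
        printf("systemic defect: triggers on every run with this input\n");
    return 0;
}
```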

Blaine Osepchuk

Good points. Thanks for sharing.

The belief that all software errors are systemic appears to be outdated.

I read Embedded Software Development for Safety-Critical Systems by Chris Hobbs as part of my research for this post. He writes extensively about heisenbugs: bugs caused by subtle timing errors, memory corruption, and the like. In fact, he shares a simple 15-line C program in his book that crashes once or twice every few million times it's run.

In the multicore, out-of-order executing, distributed computing era, systems aren't nearly as deterministic as they used to be.
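
For a flavor of the kind of thing Hobbs is describing, here's my own minimal sketch (not the program from his book): a deliberately racy C program where whether it crashes at all depends on how the scheduler interleaves the two threads, so on a typical desktop it can run cleanly millions of times and then segfault.

```c
/* Illustrative sketch only -- NOT the 15-line program from Hobbs' book.
 * A writer thread briefly sets a shared pointer to NULL while a reader
 * thread dereferences it without any locking. Whether (and how often) it
 * crashes depends on the compiler, the hardware, and the scheduler.
 * Build with: cc -pthread heisenbug.c */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 10000000L

static int value = 42;
/* volatile only forces the pointer to be re-read each iteration so the
 * race stays visible; it is NOT a substitute for real synchronization. */
static int * volatile shared = &value;

static void *writer(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++) {
        shared = NULL;      /* tiny window where the pointer is invalid */
        shared = &value;
    }
    return NULL;
}

static void *reader(void *arg)
{
    long sum = 0;
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        sum += *shared;     /* segfaults only if it hits the NULL window */
    printf("checksum: %ld\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t w, r;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    puts("survived this run");
    return 0;
}
```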

Dian Fay

They're exactly as deterministic as they used to be! What Went Wrong's first edition dates to 1998 -- at that point hardware and software engineers had been dealing with race conditions, scheduling issues, and the like for decades, although Kletz doesn't get into the gory details as he's writing for process engineers rather than software developers. Computer systems have not become non-deterministic (barring maybe the quantum stuff, which I know nothing about); rather, they've become so complex that working out the conditions or classes of conditions under which an error occurs tests the limits of human analytical capacity. From our perspective, this can look a lot like nondeterministic behavior, but that's on us, not the systems.

Blaine Osepchuk

Doesn't what you're saying effectively amount to non-determinism? If your safety-critical product crashes dangerously once every million hours of operation on average, for reasons you can't explain or reproduce no matter how hard you try, isn't it hard to call that a systemic error for all practical purposes?

This really isn't my area of expertise, by the way. Chris Hobbs explains what he means in this YouTube talk.