Quality, said Crosby, is conformance to requirement.
Quality, say others, is fitness for purpose, whatever that is.
At least Crosby was clear. When you test or otherwise verify your product or system, it passes if it does what the requirements say, and fails if not.
The only small problem is, that doesn’t measure quality – or at least, it would only do so if the requirements were complete and correct and up to date, and all stakeholders agreed that they were.
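The gap is easy to show in miniature. Here is a hedged sketch (the tax rate, function, and test are invented for illustration): a test derived faithfully from a written requirement passes with flying colours, yet the requirement itself is out of date, so the green test result tells us nothing about quality.

```python
# Hypothetical example. The written requirement says: "Sales tax is 15%."
# The code conforms to that requirement exactly, and the test proves it.
# But suppose the law has since changed the rate to 20% -- the requirement
# is wrong, so perfect conformance does not make this a good system.

def sales_tax(amount: float) -> float:
    """Compute sales tax exactly as the written requirement specifies: 15%."""
    return round(amount * 0.15, 2)

def test_conformance():
    # Derived line-by-line from the (stale) requirements document.
    assert sales_tax(100.0) == 15.0

test_conformance()  # passes -- conformance verified, quality unknown
```

The test suite is doing its job: it measures conformance, and nothing more.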
Well, as you know, requirements are always complete, correct, and up to date, and pigs have V12 Rolls-Royce Merlin XX engines, running on 100-octane fuel and rated at 1,175 horsepower at 20,000 feet, making them exceptionally swift and agile fliers.
No, it won’t do. Of course conformance to requirement is necessary, even if the requirements are admittedly not perfect. Without conformance, the system will be even worse than the requirements.
Heck, configuration management is necessary too, but a project with perfect CM and perfect conformance can still have hopelessly poor requirements.
So, is quality fitness for purpose, then? Obviously, a good system is fit for purpose and a bad one isn’t. Case proven? But unfortunately this doesn’t get us anywhere. “Good” and “fit for purpose” are just synonyms for generally satisfactory when it comes to systems. They tell us nothing about how to measure, let alone usefully predict, whether a system is of good quality.
In a vague and woolly sort of way, we intuitively feel that we can measure fitness for purpose. Don’t we just try the system out and see how it performs? Whether it achieves the, er, stated results, and, erm, makes its users happy?
But achieving results is just conformance (see Crosby, above), which depends on having the requirements right.
And user satisfaction is fickle, depending on training, experience, mood, and what else is available on the market (or for that matter, how good the user interfaces are on the user’s telly, car, and home PC). It is not entirely unmeasurable, but what you really need to do to get a reliable measurement is to measure one definite task at a time, in a usability laboratory or whatever. And to do that, you need to know exactly what tasks the system is supposed to help people accomplish.
So, the trouble with quality is this:
See, here’s a standard for making toys safe: they have to be fire-resistant and free from PCBs and Bisphenol A.
Now, software, let me see, it has to be safe, and reliable, and quick, and easy to use, and, um, demonstrate conformance, and, er, be fit for purpose … capability … mumble … functionality … No, scrub that last bit, it has to be functionally effective. No, functionally appropriate. There! That will do nicely!
No, I’m afraid not. The circle can’t be squared, except by bringing people and products together, and seeing what happens. You can do this early – with prototypes and early working versions – or late, with finished products covered in conformance certificates. But the real test is whether people manage to get the job done, to their own satisfaction, using the products.
The quality people and standards writers are right. Quality matters. Conformance is important – it’s a key precondition for good systems, just as effective configuration management is. But don’t let anyone fool you into thinking it’s the same as quality, because it isn’t.
Ian Alexander