Adapting Crosby’s 4 absolutes of quality into a software context (2024)

Philip Crosby is widely regarded as a quality leader in the manufacturing industry, having authored many books on quality between 1968 and 1999.

Some of the well-known ideas he’s regularly quoted on include: “quality is free”, “zero defects through prevention”, and his “4 absolutes of quality”.

Although Crosby spoke of these things from his context of manufacturing production line companies, his lessons are often directly ported over to the software industry, without question.

Having been involved in many Twitter conversations in the past regarding Crosby’s work, and having read some of his books and previous work, this blog relays my thoughts at a deeper level about Crosby’s “4 absolutes of quality” from his book “Quality is Free”. The 4 absolutes, to me, underpin the conversations surrounding zero defects through prevention and quality being free.


(Note: Lots of Crosby’s work is good! This is not a blog to pick holes in his work. It is purely to highlight how I have taken Crosby’s framework and adapted it to fit my own context of working with software. You may agree with me or disagree with me. And that’s OK! This is just me sharing my knowledge and view of the world of quality as I see it.)



The 4 absolutes of quality

Here they are:

  1. Quality is defined as conformance to requirements.
  2. The system for causing quality is prevention, not appraisal.
  3. The performance standard must be Zero Defects.
  4. The measurement of quality is the Price of Non-conformance, not indices.

I’m going to ask that you don’t think of these from a software perspective yet. Think of them from a hardware manufacturing perspective first, as that’s the context in which they were created.

Having worked in a hardware manufacturing context, testing hardware and firmware as well as software for a printer manufacturer, I spent time on the factory floor, seeing the production lines first hand. It gave me a first-hand insight into what Quality Assurance and Quality Control meant in that context. (These are also terms that have been ported across to the software industry and that, in my opinion, don’t fit well within the context of building software.)

Likewise, my current context at Photobox also has a huge element of production line manufacturing – people use Photobox to order printed gifts, such as different kinds of photo-books, photo prints, canvases, calendars, mugs, cards, phone covers, jigsaw puzzles, and many other kinds of products. Our manufacturing lines are a sight to see – printing thousands of images onto products at pace, each packed with care into its cleverly designed packaging, ready to be delivered to customers looking to see their happiest moments on their chosen product.

Looking at the 4 absolutes from this perspective of a production line, they seem reasonable.

Now, I’m not talking about hardware design or the conception of a hardware product. I’m talking specifically about the manufacturing line: building the physical products in line with a set of specifications that fit an exact mould of some kind, put together in specific ways before it’s all packaged up.

Hardware products are manufactured the same way every time, over and over, to a very specific manufacturing specification – each physical product created needs to fit that spec. The quality of the manufacturing could be seen as each and every individual manufactured product fitting that mould. Any deviation from that mould would be considered a defect. And quality control – the act of physically checking a percentage of manufactured products to look for “defects” (i.e. “non-conformance” to the hardware specifications) – is a means of discovering deviations and feeding back to resolve problems on the production line, be it machinery that’s failing, people making mistakes, the line trying to move too quickly, etc.

The whole idea here is tied to the thought that quality is “correctness” within this hardware context. In manufacturing lines where the goal is to build the same products over and over in the exact same way, “correctness” is naturally the lens that people look through.

You can see that out of the 4 absolutes, numbers 1, 3 and 4 somewhat make sense in this kind of context.

Number 2 makes sense when you think about putting effort into the design of the production line processes.

Hardware errors are very costly. Finding defects just before shipping a product to customers might seem like a success, but think of the cost of the materials, and the time needed to essentially reset the line, correct the defect, and re-run the production. So as much defect prevention as humanly possible seems like a sound strategy for hardware manufacturing.

But what about software?

Simply porting the 4 absolutes over to software is problematic.

Software isn’t about defining a product and then setting up a production line for producing the product, over and over and over again to then ship to customers. Software is far more dynamic than that. Software is a thinking challenge, not a building challenge. Software is a conundrum of variables.

We can try and define the needs and wants of a customer, but then in building that software there are hundreds of ways that a developer can choose to write code, using many different tools. On top of that, the complexity of software stretches across the users and customers too. What they want and need is subjective and relative. It’s different for each individual user.

And there are hundreds of variables in how they use the software, what data they use, when they use it, where they use it, how they use it – and this ultimately means there are hundreds of different kinds of product risks that affect the users’ levels of delight or despair in using the software.

With hardware, a product tends to have a specific purpose. Headphones are designed to sit on or in your ear, translating information into sounds played through their manufactured speakers for you to listen to privately. A mug is manufactured to hold hot or cold liquid and has an ambidextrous handle for people to lift and drink the liquid from. A monitor is created specifically to display the information transmitted from a computer or other device, converting the information into pixels and placing them correspondingly to present an image for people to see.

They are built with that single purpose, and people look to use them for that single purpose.

Yes, they have software and firmware within them, which are part of the manufactured product. But the software and firmware isn’t built in the same way that the hardware is manufactured.

Revisiting the 4 absolutes while thinking about software:

  1. Quality is defined as conformance to requirements.
  2. The system for causing quality is prevention, not appraisal.
  3. The performance standard must be Zero Defects.
  4. The measurement of quality is the Price of Non-conformance, not indices.

Absolute #1: Quality is defined as conformance to requirements

This doesn’t work within a software context. Yes, requirements are important – the customer does have wants and needs, and we will always have expectations that we need to build the software in line with. However, with the complexities within software, and the variables in how we build software and how users use it, there are a host of unknowns and even more unknown unknowns – unexpectations that sit outside the knowledge captured in the requirements. So saying that quality equals conformance to requirements ignores the holistic perspective of the software, encompassing both the requirements and the actuals of its usage, and taking the spectrum of knowledge and ignorance (lack of knowledge) into account.

It just doesn’t feel right to me – software quality isn’t about “correctness”. Your software can be working “correctly” based on the requirements, but at the same time can be really horrible to use based on something unexpected that is affecting the users’ experience when using it.

Software quality needs to be seen from a wider perspective relating to “goodness”. Looking beyond the explicit requirements into the realms of product risks and investigation of unknowns, determining the amount of delight or despair the customer will have while using the software. That is to say, the scale of how good (or bad) the software is to use.

Absolute #2: The system for causing quality is prevention, not appraisal.

I do see prevention as a big part of being proactive regarding software quality. You can perform investigative/exploratory testing against the idea of the software solution, then again on the artefacts and UX/UI wireframe designs, and on the architecture design and the code design – the information uncovered will relate to different kinds of: risks, variables, perspectives, ambiguities, properties, purposes, unknowns, etc.

That information can most certainly feed better designs and, in particular, better code design, enabling the prevention of many of these risks from manifesting into problems within the software. Awareness of the risks and unknowns is what makes prevention possible, and investigative/exploratory testing is the perfect approach to uncover and breed awareness of these things throughout the early activities of the SDLC, before any code is written.

However, in saying that, it is impossible to think of every risk and variable. We are human. Ignorance still applies – there are unknowns that we will remain unaware of, and hence we wouldn’t have enabled the prevention of these things through better design.

Prevention is one part of the causation of software quality. And hence the investigative testing activities remain necessary for testing the code that is written and testing the operational software too. The output of the testing here is more related to problems – yes, we discover risks at this point in the SDLC too, but once we do, we can then conduct further investigation of those risks to determine if they are actually problems.

Absolute #3: The performance standard must be Zero Defects.

Thinking about the psychology of unknowns, and the fact that there are unknowns we are unaware of (and therefore can’t test or design for), this statement becomes a bit moot.

I think “zero defects” in the context of software should take on a different meaning – “zero known defects”. That is to say, if a defect (or problem) is discovered, then it should be fixed. If you know of it, strive to fix it. But this also recognises that we won’t know about every defect, since we can’t think of everything – there will be unknowns that we remain unaware of. Additionally, if you choose not to fix a bug, then just close it.

Personally, for me, I like to look at the performance standard for software quality to be related to risks – risks discovered, risks mitigated through design, risks investigated and risks that have manifested into problems. This can be qualitative or quantitative, but ultimately builds a picture of confidence relating to our perception of quality.
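To make this risk-oriented performance standard a little more concrete, one lightweight way to picture it is as a simple risk register, where each risk carries a status and the counts per status feed that picture of confidence. Below is a minimal sketch; the statuses, field names, and example risks are my own hypothetical illustration, not from Crosby or any particular tool:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Risk:
    """A single product risk and where it currently sits."""
    description: str
    status: str  # e.g. "discovered", "mitigated", "investigated", "problem"

def risk_summary(risks):
    """Count risks by status to build a picture of quality confidence."""
    return Counter(r.status for r in risks)

# Hypothetical register for a photo-printing product
register = [
    Risk("Payment API times out under load", "mitigated"),
    Risk("Photo upload fails on slow connections", "investigated"),
    Risk("Calendar layout breaks for long month names", "problem"),
    Risk("Unclear error message on invalid postcode", "discovered"),
]

print(risk_summary(register))
```

The summary is deliberately simple: it doesn’t claim to *measure* quality, it just makes the team’s current awareness of risks visible, which is the point being made above.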

Absolute #4: The measurement of quality is the Price of Non-conformance, not indices.

To me, this absolute implies problems that relate to “correctness” (or specifically, the software being incorrect relative to the requirements). But as I’ve already mentioned, there is so much beyond requirements. Plus, requirements can be wrong themselves, or misinterpreted if a requirement is ambiguous (it’s like the “Mary had a little lamb” statement – did Mary have a pet lamb? Or did she eat a little bit of lamb for her dinner? The statement could be interpreted either way).

So again, for me, #4 is part of the picture, but isn’t the whole picture. We do need to look at the scale of correctness/incorrectness as we will have expectations set from requirements. But equally, we need to look at that scale of goodness/badness, looking holistically at the software and the ideas of the software solution, beyond just the requirements.

But measuring the quality of software is hard. As I mentioned before, software is subjective and relative. And therefore quality is also subjective and relative. As are bugs/defects/problems. So any kind of measurement of quality will be a representation from the perspective of the team or individual investigating the quality of the software.

Additionally, we can use data from users operating the software within their own contexts. This will give us insights into how they actually find using the software, which is great! However, it should be recognised that releasing software without having any perception of its quality yourself carries business risks – the software could be horrible to use at a basic level, meaning your users will experience increased despair in using it, and this could affect your reputation as a business, or your margins, if those users turn to a competitor instead because of their experience.

My preference and advice would be to use a blended approach – you need data from the users using the software, but you also want to have an understanding of quality from your own perspective too.

So… For these reasons, I feel that Philip Crosby’s 4 absolutes don’t work well in the context of software. Blindly pulling them into a software context without thought or consideration can have negative consequences, by causing misconceptions within the software industry.

My adaptations of the 4 absolutes of quality, for working in a software context

With all of the above in mind, I’d like to propose some changes to the 4 absolutes if you are using them in a software context:

  1. Quality is defined as “correctness” and “goodness” in relation to stakeholder value.
  2. The system for causing quality is prevention and detection, and subsequent action.
  3. One performance standard is “zero known defects”, and others are “risk discovery” and “risk mitigation”.
  4. The measurement of quality is subjective, relative and personal, and therefore relates to confidence in perceived quality.

Ending on a positive note

Crosby did do a lot for the hardware manufacturing industry as a promoter of quality, so I will leave you with a quote from Crosby which I think is a really important message that does apply to software, as much as any industry:

“Quality is the result of a carefully constructed cultural environment. It has to be the fabric of the organisation, not part of the fabric.” – Philip Crosby

I like this quote as it resonates with how I believe companies should think about this from an organisational cultural level, rather than quality being an add-on to think about after something is built.

What are your impressions of Crosby’s “4 absolutes of quality”? What are your opinions on them being used in a software context? Join the discussion and leave a comment below, but please remember to be respectful in your comments. Thanks!
