Tuesday, April 4, 2017

Lies, Damn Lies, and Traceability Matrices

Beware the Traceability Matrix. It deceives and misleads.

Let's take some requirements for a calculator that works out how much it will cost to park at an airport carpark:

[Image: Parking Calculator Application]

I've written some test cases for each of these requirements. I've written these test cases in good faith following what proponents of test cases consider to be good practice. Each test case is written as unambiguously as possible; each step has an expected result with clear pass/fail criteria:

[Image: Test Cases for Parking Calculator]

These test cases, as written, will pass (for a slightly more nuanced observation of what happens when these test cases are executed, see this article; but in theory, and in intention, they pass).

As shown above, I've numbered the requirements, and you will see that there's at least one test case for each requirement. Here is a traceability matrix:

[Image: Traceability Matrix]

According to the traceability matrix, I have "100% test coverage" and a "100% pass rate".  According to plenty of exit criteria and test plans I've read in my career, I have met the criteria needed to declare that the product is tested and ready to exit testing.
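Part of the problem is how trivially simple the arithmetic behind those headline numbers is. Here's a minimal sketch of how a tool might compute them; the requirement IDs, test names, and mapping are all invented for illustration, and any matrix with at least one passing test per requirement produces the same "perfect" figures:

```python
# Hypothetical requirement -> test-case mapping for a parking calculator.
# The IDs are invented; the point is that the metric only counts rows,
# it says nothing about how good any individual test is.
matrix = {
    "REQ-1": [("TC-1", "pass")],
    "REQ-2": [("TC-2", "pass"), ("TC-3", "pass")],
    "REQ-3": [("TC-4", "pass")],
}

# "Coverage": a requirement counts as covered if any test maps to it.
covered = sum(1 for tests in matrix.values() if tests)
total_tests = sum(len(tests) for tests in matrix.values())
passed = sum(1 for tests in matrix.values() for _, result in tests
             if result == "pass")

coverage = 100 * covered / len(matrix)
pass_rate = 100 * passed / total_tests

print(f"{coverage:.0f}% test coverage, {pass_rate:.0f}% pass rate")
# → 100% test coverage, 100% pass rate
```

Nothing in that calculation measures whether the tests are any good, only that they exist and that their scripted expectations were met.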

But wait

What about this?

...or these?

A famous game we play in the context-driven testing community is to try and have the calculator calculate the largest cost we can.
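To see how that game finds problems the matrix hides, imagine the fee logic is a naive duration-based calculation with no input validation. This is not the real application's code, just an invented sketch (the rate and function are hypothetical):

```python
from datetime import datetime

DAILY_RATE = 12.00  # invented rate, for illustration only


def parking_cost(entry: datetime, exit: datetime) -> float:
    """Naive cost: charge for each started day, with no validation."""
    days = (exit - entry).days + 1
    return days * DAILY_RATE


# A scripted test using sensible dates passes happily:
print(parking_cost(datetime(2017, 4, 1), datetime(2017, 4, 3)))  # → 36.0

# Nothing stops an exit date before the entry date: a negative fee.
print(parking_cost(datetime(2017, 4, 3), datetime(2017, 4, 1)))  # → -12.0

# And nothing caps the range, so the "largest cost" game produces
# a fee in the tens of millions:
print(parking_cost(datetime(1, 1, 1), datetime(9999, 12, 31)))
```

Every scripted test against the happy-path requirements passes, so the matrix stays green while the refund bug and the absurd maximum sit there untouched.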

The point, though, is that there are problems with the calculator. Problems that people who matter might be interested in. Problems made invisible, masked by a seductively green traceability matrix.

Even worse, many test management tools automatically generate the traceability matrix as a primary story-telling tool, creating bewitching tables with hypnotising statistics and no warnings.

You might look at my test cases and, even though I wrote them honestly and in good faith, find flaws in them. Surprise, surprise: just like real life. But unlike in real life, I'm not handing you a traceability matrix stripped of all context.

Traceability matrices are like the sirens of mythology: enchanting, but very dangerous. Never take a traceability matrix at face value.

1 comment:

  1. Nice post Aaron. As much as I dislike almost all testing metrics I do have one use for traceability matrices. Not enough for me to suggest that people use them unless they need to because they are in a regulated environment - but a use nonetheless.
    If I have completed my testing (scripted or exploratory) and I have spent the time to fill in the traceability fields, I might get useful information: if I am expecting 100% coverage and I get anything less, I've found a gap. In other words, the requirements traceability coverage (or code coverage) metric can tell me if I have completely missed something. It cannot tell me that I have adequate coverage for any requirement, but it can tell me when I have no coverage.
    Once the 100% value is reached then the usefulness drops to zero and it is replaced with a dangerous false sense of security.