Thursday, March 31, 2011

Putting systems into System Testing

System - A group of interacting, interrelated, or interdependent elements forming a complex whole.  - http://www.thefreedictionary.com/system

People mean different things when they call themselves testers, and think different things when they think about what it means to test something.  We generally distinguish between unit testing, integration testing, system testing, and user acceptance testing.  But what does system testing actually mean?  Why is it that, when we're "taught" about system testing, no one ever explains what a system is?


I was introduced to the idea of general systems thinking a couple of years ago, and fell in love with it, even though I didn't quite understand what it was I was falling in love with.  I still don't quite understand it, but I'd like to have a go at putting the general systems thinking approach into system testing.  I think we can take ownership of the term so that if you say "I'm a system tester", people won't confuse you with an automator, or a scripter.

Gerald Weinberg offers this diagram to illustrate different types of systems with respect to methods of thinking:
[Figure: from Figure 1.9, 'An Introduction to General Systems Thinking', Gerald M Weinberg. Complexity is plotted against randomness, dividing the space into region I (organised simplicity), region II (unorganised complexity), and region III (organised complexity).]
We can use reductionism to solve problems in region I and statistics to solve problems in region II, but region III is a tricky region: too complex for reductionism, and too organised for statistics.  This is where general systems thinking comes in.


While some testers may focus on unit testing or integration testing, and most testing literature ignores everything but the software itself, what is the role of a system tester when we take an expanded view of what a system is?  How can we apply systems thinking to this?


Well, the first thing we can do is try to define software out of region III and into one of the other regions, where there are already tools we can use.  The factory and analytical schools already try to do this by reducing software to a set of stated requirements, and then reducing testing to the task of verifying those requirements: they attempt to push software testing down into region I, a set of components that can be tested individually.  The agile school, meanwhile, tries to push software up into region II by dividing it into a structureless mass of equal, interchangeable parts, i.e. stories.  The goal of agile testing becomes determining when programming of a particular story is complete.  The whole is considered done when its constituent parts are done, and each part is considered equal to any other part.

Here is ISTQB trying to define itself out of region III and into region I:
System Testing: The process of testing an integrated system to verify that it meets specified requirements. - Standard glossary of terms used in Software Testing Version 1.3 (dd. May, 31st 2007)
[Figure: adapted from Figure 1.9, 'An Introduction to General Systems Thinking', Gerald M Weinberg, showing the ISTQB definition pulling system testing out of region III and into region I.]
Another issue is that testing textbooks, and courses such as ISTQB, give examples of problems that reside in region I.  They demonstrate reductive methods that work beautifully on region I problems, and then expect you to use those same techniques in region III.

But we're just deceiving ourselves, and those around us, by trying to sweep software into regions I and II.  We're lulling ourselves into a false sense of security, claiming we have everything under control because "100% of our stories' automated tests pass" or "We can trace every test back to a requirement in the requirements document.  We have 100% traceability!"

It also gives us a scapegoat when things go wrong: "No one could have foreseen that" or "it wasn't in the requirements".  This isn't good enough.

So, if we venture out into the frontier of region III, what do things begin to look like? 
Software is not only a system in itself, but software is the product of a system (the dev team) for a system (the client) to be a component in yet a larger system (the client's world).

Software development, therefore, becomes: "A group of people who come together to help solve a problem for another group of people via the medium of computer software."

Requirements:   The gap between what the customer desires and what they have; an emergent property of the relationship between the customers and their world. This means that to understand the requirements (as opposed to what's written in a requirements document) we must understand the customers, AND the world in which they live.  It also implies that as the customer's world changes, so do their requirements.

Requirements Document: One person's attempt to capture the above property of the customer's world via the medium of written language.

Bug: The gap between what the user desires, and what the user gets.  To understand if something's a bug, you must understand the customer.


A systems tester understands that the whole is greater than the sum of its parts.  We cannot understand the whole by analysing the parts.  All parts can individually pass, yet the system as a whole can fail.  The system can "pass" today, but "fail" tomorrow as the world changes.
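The point that individually passing parts can still fail as a whole can be made concrete with a toy sketch in Python (all component names and conversion factors here are invented for illustration): two functions each pass their own unit-level checks, yet the assembled system fails at their interface because one speaks kilometres while the other assumes miles, in the spirit of the Mars Climate Orbiter loss.

```python
# Toy sketch: each part passes in isolation, but the whole fails.
# All names and numbers are invented for illustration.

def measure_distance_km(raw_sensor_value: float) -> float:
    """Component A: convert a raw sensor reading into kilometres."""
    return raw_sensor_value * 1.5

def fuel_needed_litres(distance_miles: float) -> float:
    """Component B: litres of fuel needed for a distance given in MILES."""
    return distance_miles * 0.4

# Unit tests: each component passes against its own specification.
assert measure_distance_km(10) == 15.0
assert abs(fuel_needed_litres(100) - 40.0) < 1e-9

# System behaviour: A's kilometres are fed straight into B's miles.
distance = measure_distance_km(100)   # 150 (kilometres)
fuel = fuel_needed_litres(distance)   # silently treated as 150 miles

print(f"{fuel:.1f} litres budgeted")  # ~60.0, but 150 km only needs ~37.3
```

Every "part" here is verifiably correct against its own specification; the failure is an emergent property of the connection between the parts, which is exactly the kind of thing region III reasoning is about.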

A systems tester understands that source code has no intrinsic value.  If there's something "wrong" with the source code, it's only "wrong" insofar as a user experiences something they shouldn't.  Suppose the source code is hard to understand or maintain?  Then programmers (who are also users of the code) may experience headaches they shouldn't, and the chance increases that code is written, or omitted, which causes users to experience things they shouldn't.

A systems tester understands that, just as the perfect transistor is useless if the solder holding it to the board is defective, it doesn't matter how strong each team member is individually: if the connections between members are weak, the organism as a whole is weak.

OK, that's my first pass at trying to get my head into applying general systems theory to software testing.

It would be nice to be able to say "I'm a system tester" and have people understand what it means to think in systems, and what life is like in region III.

In a future post I'll try to illustrate how other schools of software testing are using region I and region II techniques to solve region III problems, and then, hopefully, show how the context-driven school uses region III techniques to solve region III problems.

4 comments:

  1. Hey Aaron,
    I like the way you explained the dangerous gap created by moving the system into regions I and II. I can confirm that I too have seen people selling this false sense of security that everything was tested because it met the requirements. Even after the requirements were reviewed, issues were still found in the production system.

    I think you might be onto something here.
    Keep writing about it.
    Cheers
    Jeroen

    ReplyDelete
  2. Hi Aaron,

    Very insightful post. I hate reductionism as a means of trying to understand how something works. It's like telling someone that the only way to understand how the brain works is to look at an SEM photo of a neuron.

    We, and the systems we develop/test, are greater than the sum of our parts. Context is an intrinsic part of what I do as a tester. Yes, I can say it works; yes, I can say it's robust; yes, I can say it's performant; yes, I can say it's not what the customer wants. Unfortunately the last piece always gets redacted when it comes to the decision-making part of the process :(

    Keep going with this, I like your reasoning.

    Ivor

    ReplyDelete
  3. Great post Aaron, your blog is one to watch!

    I mirror what Jeroen and Ivor have said, and I'm really looking forward to the follow up.

    Thanks for sharing.

    ReplyDelete
  4. Nice post Aaron. I like the following; I can see myself paraphrasing it in future (if you don't mind):

    "Another issue is that testing text books, and courses such as ISTQB, give examples of problems that reside in region I. They then show reductive methods that work beautifully with region I problems, and then expect you to use those techniques in region III.

    But we're just deceiving ourselves, and those around us, by trying to sweep software into regions I and II. We're lulling ourselves into a false sense of security that we say we have everything under control because "100% of our stories' automated tests pass" or "We can trace every test back to a requirement in the requirements document. We have 100% traceability!"

    It also means we have a scapegoat when things go wrong: "Noone could have foreseen that" or "it wasn't in the requirements". This isn't good enough.

    So, if we venture out into the frontier of region III, what do things begin to look like?
    Software is not only a system in itself, but software is the product of a system (the dev team) for a system (the client) to be a component in yet a larger system (the client's world)."


    My advice may still fall upon deaf ears but I think you're on to something :)

    ReplyDelete