Sunday, October 2, 2016

An Anecdote and a (small) Editorial - The man who opened the door to testing for me

I worry about posting this. I'm worried I'm going to be told to 'check my privilege'. I'm worried I'm going to be told my perspective isn't important. But to add a positive experience isn't to deny that bad experiences happen. I also don't like getting involved in Twitter dramas, but as an opinionated observer, I find their pull irresistible at times. Especially when they involve people I consider to be friends.

There are anecdotes of people leaving the testing field because of James Bach. I'm not denying this has happened. But if we're collecting anecdotes, I'd like to add mine.

An Anecdote

Towards the end of 2009, I had been testing for about two years and I had no public profile. The kind of testing I was doing I would now describe as 'exploratory testing in an agile environment'. I didn't know those words back then, though. Describing what I did with the words I knew at the time, I would say "I was doing ad hoc testing in an unstructured environment". I had done ISTQB, gone to local meetups, and interacted with other testers who worked for big corporations. They talked about things like "requirements" and "test scripts" and "traceability matrices". I'd lie in bed at night worrying about what a fraud I was. You see, I was doing things like "talking to the developers" and "asking them what the most important things to test were". When I was done, I'd report back to them. Not in a test case management tool; I'd walk to their desk and talk to them. Sometimes I'd be as formal as writing an email, or a two-page summary document. I tried testing 'properly' once, but it didn't make much sense to me. I was obviously just not 'getting it', which added to my anxiety.

The STANZ (Software Testing Australia New Zealand) Conference 2009 came around, and I begged my employer to send me, as I was desperate to learn how to 'do testing properly'. One of the speakers was James Bach. Oh, he's an expert in the field! His talk was called "Becoming a Software Testing Expert".


In his talk, he covered "The Tester's Mindset". There was lots of discussion about thinking critically, identifying threats to value, and establishing context. It was immensely useful, and really helped give structure to what I was doing and let me think critically about my testing and how to improve it.

But it still didn't scratch the itch I had.

So I plucked up the courage to ask him a question after his talk.

"I really enjoyed your talk, and it talked a lot about the kind of testing I'm doing, but, umm.... what if I want to learn how to do testing by the book?"

"Why would you want to do testing by the book?" he boomed back in his American accent. "The book is wrong!"

"I wrote a book about testing, and noone ever checked to see if I knew what I was talking about."

He continued talking to me, right through the break time. The next session was about to start, so he started rummaging through his bag. He said, "It looks like we've got to go..." Then he found what he was looking for. "But here, you should read this book. You can have it."

"Exploring Science" by David Klahr. The book James gave me as a faceless attendee at a conference in a remote part of the world

The words "The book is wrong" were the key that opened the software testing door for me. Being gifted a book, as a nobody, by a giant of the industry was what pushed me through it.

Since then, I've interacted with James countless times:

I've engaged in coaching sessions with him, which he volunteers his time to give.

The first time I spent one-on-one time in person with him was when he volunteered his time to help Brian Osman set up and be content owner of the first Kiwi Workshop on Software Testing.

The first time I collaborated with James was when he volunteered to co-write an article for Testing Trapeze magazine, and I offered to co-write with him.

These were all 'nice' things for him to do. They came from a place of authenticity.

James can be difficult to interact with, because he doesn't pretend to be nice. I find him intimidating. I have to think carefully about what I say around him. I have to be careful with my ideas. If I say something, I have to defend and justify it. But that's ok. That's who James is, and that's the deal I make when I engage with him. Through all this, I have learnt courage. I have learnt how to be my own biggest critic. Yes, I suffered from baby tiger syndrome, but I learnt and grew from that too.

In short, interacting with James is difficult but worthwhile, and I owe much of my career and success so far to James Bach.

That's the anecdote to add to the pile.

A (small) Editorial

I don't think James is a bad person. I don't think James is a malicious person. I do think James is unusual in that he values integrity over manners; honesty over niceness.

I said in the anecdote that I owe much of my career and success so far to James Bach. I think most of the people who would read this have benefited from the work James has done directly, or indirectly, consciously, or not. In fact, I can't imagine what the testing profession would look like if not for his influence on it.

But James is a double-edged sword, and it frustrates me when I read about people being hurt by his blade when they don't seem to deserve it. I don't know what to do about that: his sharpness has helped so many, and his contribution to the industry is immeasurable. But his sharpness has undeniably hurt some people who probably didn't deserve to get hurt. Dulling the blade or sheathing the sword means losing a lot.

Embracing diversity means challenging what is 'usual' and working out how to work with 'unusual'. If we strive to embrace diversity, then we should acknowledge diversity of personality, and diversity of temperaments, and work out how to work with that.

Friday, February 5, 2016

Lean Testing in theory and practice

This article was originally published in an earlier form on the Assurity Consulting website

There are many different definitions of software testing, and many views on what responsible testing looks like in our industry.  How you view the role of a tester informs what practices and artifacts you believe are valuable.

Saturday, June 6, 2015

Resources relating to "That's not the map I had in mind"

I expect that these diagrams will evolve and grow over time, which is why I have included them here for comment. This list will also grow.

XMind file for taxonomic hierarchy: XMind file

JPG file for taxonomic hierarchy: Image

Downloadable examples of various testing models coming soon

Tuesday, December 2, 2014

WeTest Weekend Workshops 2014 theme: Evolve

This last weekend (29/11/2014), we had our second WeTest Weekend Workshops 2014. The theme was "Evolve".

Wednesday, November 5, 2014

Shallow KPIs: A Tale of Two Testers

Once upon a time there was a thread on LinkedIn about KPIs for software testers. A Test Manager shared the KPIs she uses for her team:

1. Amount of bugs created.
2. Amount of bugs verified
3. Amount of assigned work completed.
4. Confirm to schedule. 

(At the risk of accusing anyone on LinkedIn of being sloppy with their language, I will assume that by 'Amount of bugs created' she means "number of bugs logged in some bug tracking tool".)

When challenged, she provided the following 'real life' scenario, as if the sheer power of this example would dazzle us all into submission:

"Tester1 – found 30 defects, verified all assigned issues by deadline.
Tester2- found 0 defects, verified 10% of issues assigned by deadline.
Who performed better Tester1 or Tester2?"

So who performed better?

Tester 2, of course. She didn't log any defects because she had established a strong working relationship with the development team: as she found an issue, she wrote it on a sticky note and gave it to the developer. The developer would then rapidly fix and redeploy, and the tester would retest and verify the fix. Because of this, she was able to remove a lot of administrative overhead and help the developers produce a high-quality product.
Tester 2 was unable to verify all the issues assigned to her by the deadline because she was very thorough, and felt that meeting an arbitrary deadline didn't contribute to the overall health of the project. Instead, she focused on doing great work.

Meanwhile, Tester 1 logged many defects. They were poorly written, and many of them were just different symptoms of the same underlying issue. The developers had to spend a lot of time trying to decipher them, and would often spend many hours chasing down bugs that turned out to be merely configuration errors. Once, he logged 10 'defects' that were immediately 'fixed' when someone came over and updated his Java environment. A lot of time was spent administering the defects in the bug tracking tool, and trying to work out whether Tester 1's defects were legitimate or not.
Tester 1 works very hard to meet the deadline when verifying issues. To do so, he performs a very shallow confirmatory check. His vulnerability to confirmation bias has led him to verify many fixes as "complete" when there were regression side-effects he didn't pick up on.

Tester 1 meets his KPIs and is up for promotion. In two years he'll be sharing his wisdom on LinkedIn.

Tester 2 has been told she isn't performing as necessary. She is going home tonight to update her resume. In a year she'll be working at a company that assesses her performance by watching her work and regularly catching up for peer review. In two years she'll be sharing her wisdom at a peer conference.

Friday, September 19, 2014

The Responsibilities of a Conference Facilitator

I have just returned from Let's Test Oz 2014, which, like the CAST conferences, operates on a K-Card style facilitation format.

During the three-day conference I saw the power a great facilitator can have. I got to experience first-hand the influence a good facilitator has on the success of a talk, so I would like to offer my perspective on what makes a good facilitator.

Thursday, August 21, 2014

Very Short Blog Post: A date with test cases.

Here's a test case problem:

The requirement:

"Formatting is automatically applied to all date fields (dd/mm/yy formatted)"
Here are my findings after a 15-minute test session:
• Formatting is automatically applied when entering dates as:
  • 12.12.2014
  • 12th Dec 2014
  • 12 Dec 2014
  • 12 December 2014
  • 12th December 2014
  • 12-12-2014
  • 12-DEC-2014
• Formatting is not applied to:
  • 12.12.14
  • 12122014
  • 12th Dec 14
  • 12th December 14
  • 12/12/14

a) Did the requirement 'pass'?
b) According to some claims, it is best practice to write one positive test case and one negative test case per requirement. What would I have learned by writing and executing two test cases?
c) Some test management tools would report 100% coverage with one test case, and if it passed, they would report that the requirement passed.

Maybe talking about testing in terms of test cases, and of passes and fails, isn't useful.
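
To make that concrete, here is a minimal sketch of the session's findings expressed as parameterised checks, using Python and pytest. The function apply_date_formatting() is a hypothetical stand-in I've written to mimic the behaviour observed above; the real field's logic is, of course, unknown.

import re
from datetime import datetime

import pytest

def apply_date_formatting(raw: str) -> str:
    """Hypothetical stand-in for the date field's auto-formatting.

    Accepts the entry formats the real field appeared to handle and
    reformats them to dd/mm/yy; anything else is left as typed.
    """
    cleaned = re.sub(r"(\d)(st|nd|rd|th)\b", r"\1", raw)  # drop ordinal suffixes
    for fmt in ("%d.%m.%Y", "%d %b %Y", "%d %B %Y", "%d-%m-%Y", "%d-%b-%Y"):
        try:
            return datetime.strptime(cleaned, fmt).strftime("%d/%m/%y")
        except ValueError:
            continue
    return raw  # formatting not applied

@pytest.mark.parametrize("raw", [
    "12.12.2014", "12th Dec 2014", "12 Dec 2014", "12 December 2014",
    "12th December 2014", "12-12-2014", "12-DEC-2014",  # formatting was applied
    "12.12.14", "12122014", "12th Dec 14",
    "12th December 14", "12/12/14",                     # formatting was not
])
def test_all_date_entries_format_to_dd_mm_yy(raw):
    # The requirement claims *all* date entries end up dd/mm/yy formatted,
    # so each input gets its own verdict rather than one shared pass/fail.
    assert apply_date_formatting(raw) == "12/12/14"

Run against this stub, twelve inputs produce twelve verdicts: the seven handled formats pass, four of the unhandled ones fail, and "12/12/14" passes only because it already happened to be in the target shape. That last nuance is exactly the kind of thing a single pass/fail against the requirement would erase.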