Once upon a time there was a thread on LinkedIn about KPIs for software testers. A Test Manager shared the KPIs she uses for her team:
1. Amount of bugs created.
2. Amount of bugs verified.
3. Amount of assigned work completed.
4. Confirm to schedule.
(At the risk of accusing anyone on LinkedIn of being sloppy with their language, I will assume that by 'Amount of bugs created' she means 'number of bugs logged in some bug tracking tool'.)
When challenged, she provided the following 'real life' scenario, as if the sheer power of this example would dazzle us all into submission:
"Tester1 – found 30 defects, verified all assigned issues by deadline.
Tester2- found 0 defects, verified 10% of issues assigned by deadline.
Who performed better Tester1 or Tester2?"
So who performed better?
Tester 2, of course. She didn't log any defects because she had established a strong working relationship with the development team: as she found an issue, she wrote it on a sticky note and gave it to the developer. The developer would then rapidly fix and redeploy, and the tester would retest and verify the fix. Because of this, she was able to eliminate a lot of administrative overhead and help the developers produce a high-quality product.
Tester 2 was unable to verify all the issues assigned to her by the deadline because she was very thorough, and felt that meeting an arbitrary deadline didn't contribute to the overall health of the project. Instead, she focused on doing great work.
Meanwhile, Tester 1 logged many defects. They were poorly written, and many of them were just different symptoms of the same underlying issue. The developers had to spend a lot of time trying to decipher them, and would often spend many hours chasing down bugs that turned out to be merely configuration errors. Once, he logged 10 'defects' that were immediately 'fixed' when someone came over and updated his Java environment. A lot of time was spent administering the defects in the bug tracking tool and trying to work out whether Tester 1's defects were legitimate.
Tester 1 worked very hard to meet the deadline when verifying issues. To do so, he performed very shallow confirmatory checks. His vulnerability to confirmation bias led him to verify many fixes as "complete" when there were regressive side effects he didn't pick up on.
Tester 1 meets his KPIs and is up for promotion. In two years he'll be sharing his wisdom on LinkedIn.
Tester 2 has been told she isn't performing as required. She is going home tonight to update her resume. In a year she'll be working at a company that assesses her performance by watching her work and regularly catching up for peer review. In two years she'll be sharing her wisdom at a peer conference.