In an earlier article, I wrote about how competition and bonuses encourage cheating and waste. I shared a personal example where I caught myself cheating and wasting resources during a competition that didn’t even matter to me. I got a lot of positive feedback, including:

Not new, but a nice example of why metrics are bad: http://devteams.at/competition_and_cheating by @dtanzer Jens Schauder (@jensschauder)

And Jens is absolutely right: This is not new. It is actually fairly old. But time and again, I work with customers who have implemented or want to implement a system of bonuses to motivate their employees. And when I talk to them, I often get answers like:

“Well, you have a point. But you just have to find the right combination of metrics so that cheating or wasting resources is not possible anymore.”

Ok… But that’s incredibly hard. Let’s assume Louise Elliott had given us the following challenge: “The side where everyone has a card wins, but there must not be any wasted cards.”

My cheat would still have been possible: I could still just declare that we were finished and hope that the counting only happens once everyone really has a card. I could even cause some chaos to delay the counting. Wasting cards would also still have been possible - it would be next to impossible to prove that not a single card had been wasted.

But that’s an artificial example. Let’s look at something real companies are doing. Louise mentioned a company that had a KPI for testers like “less than x defects found in user acceptance testing”. The idea is that testers should find the defects before the software goes to UAT - so if there are no defects in UAT, they did a good job and deserve a bonus.

What you’ve done now is incentivize testers to waste time: They need to be absolutely, positively sure that there are no defects. So they’ll probably spend much more time than necessary testing features, just to be sure. Wasting time has another positive side effect: Fewer features make it into UAT, which makes defects less likely.
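To make this concrete, here is a minimal toy model of that KPI. All the numbers are invented (the defect rate, the “10 hours finds everything” assumption), but they show the shape of the incentive: the tester has two levers that both push the KPI down, and one of them is pure waste while the other is shipping less.

```python
# Toy model (all numbers invented): expected defects escaping into UAT.
# A tester "wins" the KPI by pushing this number below a threshold,
# and there are two ways to do it: test longer, or let fewer features ship.

def defects_in_uat(features: int, hours_per_feature: float) -> float:
    defect_rate = 0.5  # assumed: defects injected per feature
    find_probability = min(1.0, hours_per_feature / 10.0)  # assumed: 10h finds ~all
    escaped_per_feature = defect_rate * (1.0 - find_probability)
    return features * escaped_per_feature

# Honest baseline: 20 features, 4 hours of testing each.
print(defects_in_uat(features=20, hours_per_feature=4))   # 6.0 escaped defects

# Lever 1: double the testing time (waste).
print(defects_in_uat(features=20, hours_per_feature=8))   # 2.0

# Lever 2: let fewer features into the release.
print(defects_in_uat(features=7, hours_per_feature=4))    # 2.1 - same KPI, less product
```

The KPI can’t tell the two levers apart - both look like “a good job” on paper.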

Some time ago, I talked to a manager about a similar situation. He told me: “Of course you are right. That’s why we also need a KPI that says the average testing time per feature must go down every year.”

Your testers could now just reject all the hard-to-test features very quickly: Find some minor bug in the feature or the specification, and reject it. This increases the lead time for the hard-to-test stuff, so more easy-to-test stuff goes into every release. Fewer features in UAT, and all of them easy to test: Fewer defects! Of course, your product still suffers. So we need another KPI…
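You can extend the toy model from above to show this new failure mode. Again, every number here is invented - the point is only that the “reject the hard stuff” strategy improves both KPIs at once while the amount of shipped product drops:

```python
# Toy model continued (all numbers invented): two KPIs at once.
# KPI 1: few defects found in UAT. KPI 2: falling average test time per feature.
# The "reject the hard stuff" strategy wins both, while less product ships.

from dataclasses import dataclass

@dataclass
class Feature:
    defects: int        # defects actually in the feature
    test_hours: float   # hours needed to test it properly

backlog = [Feature(defects=1, test_hours=2)] * 8 + \
          [Feature(defects=4, test_hours=12)] * 4  # four hard-to-test features

def release_stats(features):
    uat_defects = sum(f.defects * 0.2 for f in features)  # assumed: 20% escape to UAT
    avg_test_time = sum(f.test_hours for f in features) / len(features)
    features_shipped = len(features)
    return uat_defects, avg_test_time, features_shipped

honest = release_stats(backlog)  # test everything, hard features included
gaming = release_stats([f for f in backlog if f.test_hours <= 2])  # reject hard ones

print("honest:", honest)  # (4.8, 5.33, 12) - worse on both KPIs
print("gaming:", gaming)  # (1.6, 2.0, 8)   - both KPIs better, 4 features stuck
```

Both numbers the manager measures get better, and the only thing that got worse - the four features stuck in rework - isn’t measured at all.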

I guess there might actually be a perfect set of KPIs - maybe it is possible to find a combination of metrics where cheating and wasting resources become so hard that people won’t even try anymore. But even if you find that set, you are still screwed. In one of my next blog posts, I’ll tell you why: Stay tuned! Subscribe to my newsletter so you don’t miss the third part of this mini-series…

P.S.: Did you know that you can ask me anything?