Operations Reviews, or ops reviews, are common management tools in Support organizations. If you’re going to measure something, goes the logic, it’s best to act on those measurements. Making course corrections sooner means that things get fixed sooner, with less customer impact. Bad habits don’t have time to take root. Quick, decisive action at a weekly ops review can keep the ship on an even keel.
As with most of these myths, there’s a kernel of truth: it is important to stay ahead of problems. The trouble lies in execution. Human nature makes it very, very hard for weekly ops reviews to be conducted productively.
The discipline of behavioral economics has taught us a great deal about the biases that humans exhibit. Unlike the rationally maximizing human beings who play the starring role in most economics textbooks, real humans make decisions using many mental shortcuts—shortcuts that conferred an evolutionary advantage. If your ancestors had performed a careful, logical analysis when saber-tooth tigers prepared to pounce on them, you wouldn’t be here. But because we unwittingly fall back on these mental shortcuts, we’re not always as rational as we might be. (For a comprehensive and wonderful treatment of this, read Thinking, Fast and Slow by Daniel Kahneman, one of the founders of behavioral economics; for a faster, thoroughly enjoyable version, try Predictably Irrational by Dan Ariely.)
What do saber-tooth tigers and behavioral economics have to do with weekly ops reviews? Well, when humans are confronted with data, they can’t avoid finding patterns and explanations—even if variations in the data are 100% consistent with chance. (One last book recommendation: The Drunkard’s Walk by Leonard Mlodinow lays out many examples where even mathematically sophisticated people can’t help but believe in explanations that don’t exist.)
This bias towards pattern recognition turns ops reviews into the Theater of the Absurd. We have all this data, so we need to analyze it and take action! Page views increase, and some minor change in the website UI is given credit. CSAT goes down, and managers are called on the carpet to explain themselves. Culprits are identified and chastised. Yet rarely is there someone in the room with the statistical acumen, the fortitude, and the mandate to say, “hey, but maybe that’s just normal variation.”
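To make “just normal variation” concrete, here’s a minimal simulation in Python. The numbers are purely hypothetical: a true CSAT of 90% that never changes, measured from 200 survey responses per week. Every movement it reports is sampling noise, yet the output looks exactly like the data that gets dissected in ops reviews:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical numbers: the "true" CSAT is a constant 90%, measured from
# 200 survey responses per week. Every movement below is sampling noise.
TRUE_CSAT = 0.90
RESPONSES_PER_WEEK = 200
WEEKS = 52

weekly_csat = []
for _ in range(WEEKS):
    satisfied = sum(random.random() < TRUE_CSAT for _ in range(RESPONSES_PER_WEEK))
    weekly_csat.append(satisfied / RESPONSES_PER_WEEK)

print(f"True CSAT every single week: {TRUE_CSAT:.1%}")
print(f"Observed range over the year: {min(weekly_csat):.1%} to {max(weekly_csat):.1%}")

# Week-over-week swings big enough to trigger an ops-review "finding"
big_swings = sum(
    abs(weekly_csat[i] - weekly_csat[i - 1]) >= 0.03 for i in range(1, WEEKS)
)
print(f"Week-over-week swings of 3+ points: {big_swings} out of {WEEKS - 1}")
```

In a typical run, the metric ranges across several points over the year and swings by three or more points week-over-week a dozen or more times, any one of which could launch an hour of earnest root-cause hunting.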
It’s not just any patterns we find, either. “Confirmation bias” means we find the patterns we’re looking for. (If you need proof of this, just listen to two people at opposite ends of the political spectrum explain how a particular news report supports their positions.) If I change the website’s color, and knowledgebase page views go up the following week, is there any doubt in my mind where the credit lies? This doesn’t make me a bad human; it just makes me human.
A statistical curiosity called regression to the mean also reinforces mistaken notions of cause and effect in ops reviews. If my metrics are atypically bad purely by chance, they’re likely to be more typical—in other words, better—next week. This is true whether or not I get yelled at…but it certainly makes yelling at me look like a highly effective strategy!
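The same toy setup shows why the yelling looks effective. Below (again with invented numbers: a constant true CSAT of 90% and 200 responses per week), we flag any week that lands below 87% by chance as “bad,” intervene in no way whatsoever, and see how often the next week “improves”:

```python
import random

random.seed(7)

# Same hypothetical setup: true CSAT is a constant 90%, 200 responses/week,
# so every "bad week" below is bad purely by chance.
TRUE_CSAT = 0.90
RESPONSES_PER_WEEK = 200

def one_week():
    """One week's measured CSAT: sampling noise around an unchanging 90%."""
    hits = sum(random.random() < TRUE_CSAT for _ in range(RESPONSES_PER_WEEK))
    return hits / RESPONSES_PER_WEEK

TRIALS = 10_000
bad_weeks = improved = 0
for _ in range(TRIALS):
    this_week, next_week = one_week(), one_week()
    if this_week < 0.87:  # an "atypically bad" week, purely by chance
        bad_weeks += 1
        improved += next_week > this_week  # note: no intervention happened

print(f"Atypically bad weeks: {bad_weeks}")
if bad_weeks:
    print(f"Followed by a better week: {improved / bad_weeks:.0%}")
```

In a typical run, well over nine out of ten “bad” weeks are followed by a better one, with nothing behind the improvement but regression to the mean.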
I know of some businesses where the weekly ops review is anticipated much like a visit to the dentist: you can be pretty sure there will be pain somewhere in the office, and you mostly hope it’s inflicted on someone else.
So, what’s the alternative?
- In a normal environment, there’s no reason to have a full-cast review of operational metrics more than once a month. This at least reduces the week-to-week “jitter” in the metrics and strengthens the signal relative to the noise. Two caveats: first, individual operations analysts and managers may still scan the metrics more frequently to look for large, unexpected movements that require intervention; second, during periods of rapid change, such as a short, intense busy season or the rollout of new initiatives, more frequent, focused meetings are appropriate.
- Be aware of the natural tendency to assign cause where no cause may exist. Be especially wary of conclusions that confirm your own instincts, or that explain a return to normalcy: confirmation bias and regression to the mean may be in play. It’s not a bad idea to find someone who can calculate confidence intervals on your data (see the sketch after this list): you may be surprised how little of the variation your business experiences is statistically significant.
- Finally, remember that the purpose of metrics is to inform the conversation and to aid learning…not to reward and punish. As Dean Spitzer points out, the effectiveness of metrics as learning tools is directly related to the health of the culture of measurement. And nothing damages the culture of measurement more quickly than beating people up based on their numbers.
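If a statistician isn’t handy, even a back-of-the-envelope confidence interval goes a long way. Here’s a minimal sketch in Python; the survey counts are invented for illustration, the normal approximation is only a rough guide, and comparing two weeks properly calls for a two-proportion test:

```python
import math

def csat_confidence_interval(satisfied, total, z=1.96):
    """Approximate 95% confidence interval for a CSAT proportion.

    Normal-approximation sketch only; for small samples prefer a Wilson or
    exact interval, and compare weeks with a proper two-proportion test
    rather than eyeballing overlap.
    """
    p = satisfied / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return p - margin, p + margin

# Invented survey counts for illustration: 178/200 vs. 185/200 satisfied.
for week, (satisfied, total) in enumerate([(178, 200), (185, 200)], start=1):
    lo, hi = csat_confidence_interval(satisfied, total)
    print(f"Week {week}: CSAT {satisfied / total:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")

# The intervals overlap heavily: a 3.5-point "improvement" that the data
# cannot distinguish from ordinary week-to-week sampling noise.
```

Here a seemingly headline-worthy 3.5-point CSAT jump sits comfortably inside the noise, exactly the kind of “result” a monthly review should learn to shrug at.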
“The beatings will continue until morale improves” is a funny t-shirt, but it’s no fun in real life. Keep your ops reviews grounded in reality, positive, and informative.