The rule is simple: be careful what you measure
Simon Caulkin, management editor
Sunday February 10 2008
This article appeared in the Observer on Sunday February 10 2008 on p8 of the Business news & features section. It was last updated at 00:11 on February 10 2008.
If there's one management platitude that should have been throttled at birth, it's 'what gets measured gets managed'. It's not that it's not true - it is - but it is often misunderstood, with disastrous consequences.
The full proposition is: 'What gets measured gets managed - even when it's pointless to measure and manage it, and even if it harms the purpose of the organisation to do so.' In the truncated version, there are two lethal pitfalls. The first is the implication that management is only about measuring. Way back in 1956, the academic V F Ridgway famously noted the dysfunctional consequences of managers' tendency to reduce as many as possible of their concerns to numbers.
Quality guru W Edwards Deming went further, putting 'management by use only of visible figures, with little consideration of figures that are unknown or unknowable' at No 5 in his list of seven deadly management diseases. Henry Mintzberg, the sanest of management educators, proposed that starting 'from the premise that we can't measure what matters' gives managers the best chance of realistically facing up to their challenge.
In the past few years, of course, spurious measurement has proliferated beyond, well, measure. Although regulation comes into it, this is often the consequence of IT systems that can measure anything that moves - the number of telephone rings, how long calls take, what they cost, and how many calls a person makes an hour, for instance. The figures can be on a manager's desk the same evening.
But just because you can measure it, doesn't mean you should. All the above, although standard in call centres, are generally pointless, because they only tell you about levels of activity, not about how well the call centre's purpose is being achieved. Thus, a call centre may boast high productivity and low costs per call but that's irrelevant if most of its activity is mopping up customer complaints about poor service. Activity measures prevent managers from seeing that cheaper calls aren't the answer: better to improve the service so that they don't need a call centre with all its associated costs in the first place.
It gets worse when activity measures form the basis of contracts with suppliers, as they often do in the hard-nosed-sounding guise of 'payment by results'. Payment by results, whether for calls answered, appointments made or patients seen, is actually 'payment by activity' - activity that doesn't necessarily advance and may actually obstruct the overall purpose. As in the call-centre example above, a contractor paid by 'results' (ie activity) has no incentive to improve service, which would reduce the number of calls, and hence payment, and every incentive to worsen it, by cutting the time spent on calls as they inexorably increase in number.
Here we encounter the second problem with the measurement-management equation. All too often in a kind of Gresham's law (which said bad money drives out good), the easy-to-measure drives out the hard, even when the latter is more important. Strategy writer Igor Ansoff said: 'Corporate managers start off trying to manage what they want, and finish up wanting what they can measure.'
What happens when bad measures drive out good is strikingly described in an article in the current Economic Journal. Investigating the effects of competition in the NHS, Carol Propper and her colleagues made an extraordinary discovery. Under competition, hospitals improved their patient waiting times. At the same time, the death-rate following emergency heart-attack admissions substantially increased. Why? As targets, waiting times were and are measured (and what gets measured gets managed, right?). Emergency heart-attack deaths were not tracked and therefore not managed. Even though no one would argue that the trade-off - shorter waiting times but more deaths - was anything but a travesty of NHS purpose, that's what the choice of measure produced.
As the paper observes: 'It seems unlikely that hospitals deliberately set out to decrease survival rates. What is more likely is that in response to competitive pressures on costs, hospitals cut services that affected [heart-attack] mortality rates, which were unobserved, in order to increase other activities which buyers could better observe.'
In other words, what gets measured, matters. Measures set up incentives that drive people's behaviour. And woe to the organisation when that behaviour is at odds with its purpose. Imagine the cost to NHS morale (one of Deming's unknown and unknowable figures) of the knowledge that managing to the measure resulted in more deaths - the grotesque opposite of its aims. Hospitals are the extreme example of a general case. As such, they allow us a definitive rephrasing of our least favourite management mantra. What gets measured gets managed - so be sure you have the right measures, because the wrong ones kill.