「華人戴明學院」 is a learning community devoted to Deming's philosophy, dedicated to the study, promotion, and application of the System of Profound Knowledge. The purpose of this blog is to advance the ideas and ideals of W. Edwards Deming.

Sunday, February 10, 2008

The rule is simple: be careful what you measure

This piece is by a British columnist who frequently cites Dr. Deming. The same theme came up at a workshop last year, where the top executive of a Department of Health (署立) hospital expressed a similar opinion.

The rule is simple: be careful what you measure
Simon Caulkin, management editor
The Observer, Sunday February 10 2008
This article appeared in The Observer on Sunday February 10 2008, on page 8 of the Business news & features section.
If there's one management platitude that should have been throttled at birth, it's 'what gets measured gets managed'. It's not that it's not true - it is - but it is often misunderstood, with disastrous consequences.
The full proposition is: 'What gets measured gets managed - even when it's pointless to measure and manage it, and even if it harms the purpose of the organisation to do so.' In the truncated version, there are two lethal pitfalls. The first is the implication that management is only about measuring. Way back in 1956, the academic V F Ridgway famously noted the dysfunctional consequences of managers' tendency to reduce as many as possible of their concerns to numbers.
Quality guru W Edwards Deming went further, putting 'management by use only of visible figures, with little consideration of figures that are unknown or unknowable' at No 5 in his list of seven deadly management diseases. Henry Mintzberg, the sanest of management educators, proposed that starting 'from the premise that we can't measure what matters' gives managers the best chance of realistically facing up to their challenge.
In the past few years, of course, spurious measurement has proliferated beyond, well, measure. Although regulation comes into it, this is often the consequence of IT systems that can measure anything that moves - the number of telephone rings, how long calls take and cost and how many calls a person makes an hour, for instance. The figures can be on a manager's desk the same evening.
But just because you can measure it, doesn't mean you should. All the above, although standard in call centres, are generally pointless, because they only tell you about levels of activity, not about how well the call centre's purpose is being achieved. Thus, a call centre may boast high productivity and low costs per call but that's irrelevant if most of its activity is mopping up customer complaints about poor service. Activity measures prevent managers from seeing that cheaper calls aren't the answer: better to improve the service so that they don't need a call centre with all its associated costs in the first place.
It gets worse when activity measures form the basis of contracts with suppliers, as they often do in the hard-nosed-sounding guise of 'payment by results'. Payment by results, whether for calls answered, appointments made or patients seen, is actually 'payment by activity' - activity that doesn't necessarily advance and may actually obstruct the overall purpose. As in the call-centre example above, a contractor paid by 'results' (ie activity) has no incentive to improve service, which would reduce the number of calls, and hence payment, and every incentive to worsen it, by cutting the time spent on calls as they inexorably increase in number.
Here we encounter the second problem with the measurement-management equation. All too often in a kind of Gresham's law (which said bad money drives out good), the easy-to-measure drives out the hard, even when the latter is more important. Strategy writer Igor Ansoff said: 'Corporate managers start off trying to manage what they want, and finish up wanting what they can measure.'

What happens when bad measures drive out good is strikingly described in an article in the current Economic Journal. Investigating the effects of competition in the NHS, Carol Propper and her colleagues made an extraordinary discovery. Under competition, hospitals improved their patient waiting times. At the same time, the death-rate following emergency heart-attack admissions substantially increased. Why? As targets, waiting times were and are measured (and what gets measured gets managed, right?). Emergency heart-attack deaths were not tracked and therefore not managed. Even though no one would argue that the trade-off - shorter waiting times but more deaths - was anything but a travesty of NHS purpose, that's what the choice of measure produced.

As the paper observes: 'It seems unlikely that hospitals deliberately set out to decrease survival rates. What is more likely is that in response to competitive pressures on costs, hospitals cut services that affected [heart-attack] mortality rates, which were unobserved, in order to increase other activities which buyers could better observe.'

In other words, what gets measured, matters. Measures set up incentives that drive people's behaviour. And woe to the organisation when that behaviour is at odds with its purpose. Imagine the cost to NHS morale (one of Deming's unknown and unknowable figures) of the knowledge that managing to the measure resulted in more deaths - the grotesque opposite of its aims. Hospitals are the extreme example of a general case. As such, they allow us a definitive rephrasing of our least favourite management mantra. What gets measured gets managed - so be sure you have the right measures, because the wrong ones kill.
simon.caulkin@observer.co.uk
