「華人戴明學院」 is a learning community for the Deming philosophy, devoted to the study, promotion and application of the System of Profound Knowledge. The purpose of this blog is to advance the ideas and ideals of W. Edwards Deming.

Monday, November 19, 2012

SPC: From Chaos To Wiping the Floor 2003


I stumbled upon this introduction/post from ten years ago.


SPC: From Chaos To Wiping the Floor by Lynne Hare (Quality Progress. July 2003) www.asq.org/pub/qualityprogress/past/0703/58spc0703.html
The title might perhaps be rendered in Chinese as 「SPC(統計流程管制):從渾沌到所向披靡」.
A brief explanation of the idiom in the subtitle:
mop up the floor with / wipe the floor with
Defeat thoroughly, overwhelm, as in The young boxer said he was sure to mop up the floor with his opponent, or I just know we'll wipe the floor with the competition. [Late 1800s]


wipe the floor with sb INFORMAL
to defeat someone very easily:
"I hear Italy beat France in the semi-finals last night." "Beat them? They wiped the floor with them!"



SPC: From Chaos To Wiping the Floor

by Lynne Hare

In 50 Words Or Less
  • Statistical process control (SPC) began with Walter Shewhart in the 1920s.
  • SPC eventually became more than the application of control charts and was used to tame manufacturing processes.
  • It now extends to processes beyond those in manufacturing and is fundamental to maintaining a corporate competitive edge.
You can almost hear Walter Shewhart's detractors saying if he had been more careful in his setup, he would not be so disappointed his experiment did not come out as he had predicted it would. He ignored advice like "Make sure the temperature is right" and "Did you remember to check the air flow?" He knew he had done those things right, and still the answers did not come out exactly as the predictive model suggested.

With the words of second guessers echoing, Shewhart struggled, as we all do, during the chaotic front end of scientific inquiry. If you recall how you stared at a blank sheet of paper or screen trying to develop an approach to your first, and every subsequent, term paper, you will remember the feeling Shewhart must have had as he dwelled on the nature of variation. Chaotic beginnings are always uncomfortable.

It was not popular in Shewhart's day to think of every data point as possessing both a deterministic and a random component even though statistical concepts had been around for many years. Still, with papal infallibility, the statement "All chance is but direction thou canst not see"1 loomed large. If we only had perfect knowledge, reasoned Shewhart, the random component would disappear. But who has time to wait for perfect knowledge? There is work to do. There are decisions to be made. We must have a method now.

Time To Apply Existing Theory
So, how much of a scientific observation is deterministic, and how much is random? Shewhart, a physicist and an empiricist who was up against problems of process control, devised a way to learn. The solution, he concluded in delightful understatement, "... requires the application of statistical methods which, up to the present time, have been for the most part left undisturbed in the journals in which they appeared."2

He began with the definition of control: "A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means we can state, at least approximately, the probability the observed phenomenon will fall within the given limits."3

From there, he went on to distinguish chance causes from assignable causes of variation--the random element from the deterministic element--believing the assignable causes of variation could be found and eliminated. The first control chart, published as an internal Bell Telephone Laboratories report in May 1924, was established to do just that.

The Economics of Change
Shewhart based control chart limits more on the economics of change than on underlying probabilities. Ever the empiricist, Shewhart seems not to have trusted probability limits alone. Instead, he drew his run charts, the basis of control charts, with limits based on marked improvements from previous periods of inspection (see "The First Control Chart"). He did bow in the direction of statistical theory, but theory alone did not form the limits.

His general reliance on practicality is exemplified by his famous experiments with a large serving bowl (see "Whatever Happened to That Bowl?" p. 61). Borrowed from Shewhart's own kitchen, it was used to hold numbered, metal-lined, disk-shaped tags, which he drew at random to confirm that the standard deviation of subgroup sample means is the standard deviation of individual samples divided by the square root of the subgroup size.4 In symbols, Shewhart was seeking to satisfy himself of the relationship:
$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$
where $\sigma_{\bar{x}}$ is the standard deviation of the subgroup mean, called the standard error, $\sigma$ is the standard deviation of individual observations and $n$ is the number of observations in each subgroup mean.
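
The relationship is easy to confirm by simulation. The Python sketch below is only a rough stand-in for Shewhart's bowl experiment: the "bowl" here holds hypothetical tags numbered 1 to 100, and the subgroup size and number of draws are invented for illustration.

```python
# Simulation in the spirit of Shewhart's bowl experiment: draw tags at random,
# form subgroups, and compare the spread of subgroup means with sigma / sqrt(n).
import random
import statistics

random.seed(1)

bowl = list(range(1, 101))                # hypothetical numbered tags, not Shewhart's
sigma = statistics.pstdev(bowl)           # standard deviation of individual tags

n = 5                                     # subgroup size
num_subgroups = 10_000

subgroup_means = [
    statistics.mean(random.choices(bowl, k=n))   # draw with replacement
    for _ in range(num_subgroups)
]

observed_se = statistics.stdev(subgroup_means)
theoretical_se = sigma / n ** 0.5

print(f"sigma of individual tags : {sigma:.3f}")
print(f"observed std. error      : {observed_se:.3f}")
print(f"theoretical sigma/sqrt(n): {theoretical_se:.3f}")
```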

As his technology progressed, Shewhart introduced the extremely important concept of rational subgrouping. In the simplest of processes, a rational subgroup might consist of several items produced during a relatively short time period.

In more complicated processes, however, a rational subgroup should be representative of production and inclusive of potential structural variation. For example, dimensions of pieces from a six-lane stamping machine should be sampled in sets of six. (See "Structural Variation" for more clarification.) While this seems obvious, it is surprising how often this principle is ignored in practice. Shewhart recognized its importance and included it in his early work.
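
As a sketch of the idea (the six-lane machine is hypothetical, and the lane offsets and dimensions below are invented numbers), the following snippet forms each rational subgroup from one piece per lane at each sampling time, so lane-to-lane structural variation is represented within every subgroup rather than confounded across them.

```python
# Rational subgrouping sketch for a hypothetical six-lane stamping machine.
import random
import statistics

random.seed(2)

NUM_LANES = 6
NOMINAL = 10.0                                           # assumed nominal dimension
lane_offsets = [0.00, 0.02, -0.01, 0.03, 0.00, -0.02]    # assumed lane-to-lane biases

def rational_subgroup():
    """One rational subgroup: one dimension from each of the six lanes,
    taken at the same sampling time."""
    return [NOMINAL + lane_offsets[lane] + random.gauss(0, 0.01)
            for lane in range(NUM_LANES)]

subgroups = [rational_subgroup() for _ in range(25)]

for t, sg in enumerate(subgroups[:3]):
    print(f"t={t}: mean={statistics.mean(sg):.4f}  range={max(sg) - min(sg):.4f}")
```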

SPC's Transformation
Since those early days of the development of statistical process control (SPC) there have been contributors too numerous to list. The early developers included Harold Dodge, Harry Romig and W. Edwards Deming. Also included are Eugene Grant, author of the classic text Statistical Quality Control, first published in 1946,5 and the authors at Western Electric who first published the Statistical Quality Control Handbook in 1956.6

During this time, the formation of control chart limits transitioned from Shewhart's original concept of economic limits to probability limits usually based on group variation. Also, the term SPC began to stick and came to mean much more than the application of control charts alone. Topics such as acceptance sampling, data analysis and interpretation, and managing for quality were folded into the discipline.

The Rules
SPC is the application of statistical tools to the improvement of quality and productivity. Volumes of excellent advice concerning its use are available, including works by Deming7 and Joseph M. Juran.8

Preeminent among the SPC tools are control charts. Perhaps because of the inexactness of the science, control charts have been subject to differing opinions regarding their use, prompting a sea of debate. Their having served well despite these differing opinions is a tribute to the ruggedness of Shewhart's invention. Debated issues include the appropriateness of control charts, derivation of their limits, presence or absence of specification limits, sampling frequency, action rules and software.

Some experts think you should never expect the process mean to be stable, as Shewhart's control chart for the mean implies, and that Shewhart methods therefore suffer from a faulty assumption from the start. The argument is that process means will drift under any system of causes, whether common or assignable, so in place of Shewhart control charts one should use a predictive function based on time series methods, such as exponentially weighted moving averages (EWMA).9

The strategy is to use the model to predict the next process result. Then, if the observed and predicted values differ by more than chance alone would allow, there is sufficient information to persuade the user a process change has occurred. The user should take corrective action by adjusting the process or seeking the cause of the change or both. This line of reasoning has much merit, and EWMA and other time series methods serve well. So do Shewhart charts. They are different models for the same systems, and as George Box is fond of saying, "All models are wrong. Some are useful."10
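
A minimal EWMA monitor in this spirit might look like the sketch below. This is not the specific procedure from Box and Luceño's book; the smoothing constant, target, sigma and data are assumed values chosen only for illustration.

```python
# Minimal EWMA monitoring sketch: smooth the observations, compare the EWMA
# statistic to limits based on its (exact) variance, and report signal points.
def ewma_signals(data, target, sigma, lam=0.2, L=3.0):
    """Return (index, ewma) pairs where the EWMA falls outside its control limits."""
    z = target                       # start the EWMA at the target value
    signals = []
    for i, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        # Exact EWMA variance after i observations
        var = (sigma ** 2) * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i))
        limit = L * var ** 0.5
        if abs(z - target) > limit:
            signals.append((i, round(z, 3)))
    return signals

# Usage with made-up data that drifts upward after the tenth point
data = [10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1,
        10.3, 10.4, 10.5, 10.6, 10.7]
print(ewma_signals(data, target=10.0, sigma=0.15))
```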

Departing from Shewhart's initial notion, textbooks teach that control chart limits should be based on probability. Limits are usually set at plus or minus three standard errors from the mean, where the standard error is the pooled standard deviation of the within-time observations divided by the square root of the sample size.

You can imagine taking repeated sets of, say, five observations each from a manufacturing line over a shift's production. Combining these subgroup standard deviations into a single estimate, dividing that estimate by the square root of the sample size, multiplying the result by 3 and then adding and subtracting it from the target or central value produces these probability limits. Combining the standard deviations is accomplished by pooling using the equation:
$$s_p = \sqrt{\frac{1}{k}\sum_{i=1}^{k} s_i^2}$$
where $s_p$ is the pooled standard deviation, $k$ is the number of subgroups taken and the $s_i^2$, $i = 1, 2, \ldots, k$, are the squares of the individual subgroup standard deviations.
The control limits would be the mean or target value $\pm 3\, s_p/\sqrt{n}$. This assumes the sample sizes are the same at each sampling time. Other, more sophisticated procedures can be found in this article's references.
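
The calculation just described can be sketched in a few lines of Python. The subgroup data, target and sample size below are invented; the code simply pools the subgroup standard deviations, divides by the square root of the sample size, and sets limits at the target plus and minus three standard errors.

```python
# Probability limits from pooled within-subgroup variation, as described above.
import statistics

subgroups = [                              # made-up subgroup measurements
    [10.02, 9.98, 10.01, 10.03, 9.97],
    [10.00, 10.04, 9.99, 10.02, 9.98],
    [9.99, 10.01, 10.00, 10.03, 10.02],
]
target = 10.00
n = len(subgroups[0])                      # equal subgroup sizes assumed
k = len(subgroups)

# Pooled standard deviation: root mean square of the subgroup standard deviations
s_p = (sum(statistics.stdev(sg) ** 2 for sg in subgroups) / k) ** 0.5

ucl = target + 3 * s_p / n ** 0.5
lcl = target - 3 * s_p / n ** 0.5
print(f"s_p = {s_p:.4f}, LCL = {lcl:.4f}, UCL = {ucl:.4f}")
```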

It is disappointing to see some users place specification limits on control charts. Processes don't know or even care about specifications. The presence of specification limits on control charts encourages users to adjust on the basis of them instead of the calculated limits. The resulting miscued adjustments are likely to result in increased process variation, which is the opposite of the intent.

If treated well, processes will behave nearly consistently with capability, which is meant to be a measure of intrinsic, inherent variation. The use of specification limits is, most likely, an attempt to alert users when the process is creating units that are out of specification. Such efforts are misguided because most control charts work with sample means, not individual observations, and means have a narrower distribution than individual observations do.

Textbooks also teach that in order to establish control limits it is necessary to bring the process under control. This is as baffling as the huckster who promises to reveal techniques for making several million dollars, beginning his spiel by saying, "First, get one million dollars. Now ..."

By what means should you bring the process into control, and how, without a control chart, would you know if the process is in control? No process is ever in strict control if by that we mean it always performs as best it can. There is always some source of variation acting on any process.

A good start might be to collect subgroup samples in accordance with Shewhart's principle of rational subgroup sampling; quantify intrinsic, inherent process capability variation, structural variation and assignable cause variation; and then base the initial limits on the pooled within group variation, with allowance for some small structural variation. Procedures for doing this are explained in detail in "Statistics Roundtable: Chicken Soup for Processes" in the August 2001 issue of Quality Progress.11

Sampling frequency is always a bone of contention. Numerous papers have been written about establishment of sampling frequency on the basis of the costs of sampling, being out of control and adjustment. Many of the required input elements appear to be unknown and unknowable.

A parallel consideration is the use of run rules to augment control chart action rules, making them more powerful to detect departures from target. References consider corrective action necessary when two consecutive observations, for example, appear on the same side of the centerline between two and three standard errors from the centerline. Corrective action is also called for when seven consecutive observations appear on one side of the centerline.

There are many more such rules, and practitioners are confused about which rules to invoke. Average run length calculations can be helpful in these situations. Action rules trigger adjustment activity, and various adjustment rules such as "adjust by an amount that is equal to the difference between the last out of control observation and the target" find practical application. Such rules are ad hoc, and more work is needed to find adjustment rules that are both easy to use and effective. Practitioners would do well to recall the lessons learned from Deming's funnel experiment.12
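
As an illustration of how such run rules might be coded, the sketch below applies only the two rules quoted above to a series of subgroup means, with an assumed target and standard error; it is not any particular handbook's complete rule set.

```python
# Two run rules applied to subgroup means: (1) two consecutive points on the same
# side of the centerline, both between two and three standard errors from it;
# (2) seven consecutive points on one side of the centerline.
def run_rule_signals(means, target, std_err):
    signals = []
    for i in range(len(means)):
        if i >= 1:
            d1, d2 = means[i - 1] - target, means[i] - target
            if d1 * d2 > 0 and all(2 * std_err < abs(d) < 3 * std_err for d in (d1, d2)):
                signals.append((i, "2 consecutive between 2 and 3 std. errors"))
        if i >= 6:
            window = [m - target for m in means[i - 6:i + 1]]
            if all(d > 0 for d in window) or all(d < 0 for d in window):
                signals.append((i, "7 consecutive on one side"))
    return signals

# Usage with made-up subgroup means that sit above target
means = [10.000, 10.005, 10.025, 10.024, 10.012,
         10.008, 10.015, 10.010, 10.006, 10.011]
print(run_rule_signals(means, target=10.0, std_err=0.01))
```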

SPC Doesn't Just Happen
SPC software is plentiful and relatively inexpensive, and organizational managers have fallen prey to the temptation to "install" SPC in manufacturing settings without understanding it themselves or providing training for users and their intermediate management. Auditors often find dust-covered computer screens and process adjustments not justified by the charts. The disappointing results have caused some managers to believe SPC will not work in their processes.

Understandably, a strong desire to please customers encourages software vendors to compute every statistic known to humankind (and some not) to be able to claim their software will give their customers what's needed. The quality gurus, each of whom has favorite methods, encourage this. But with each added Cp, Cpk, skewness and kurtosis, the confusion mounts. What webs we weave!

Despite all the controversy, control charts and the rest of the tools used in SPC--check sheets, run charts, flow diagrams, histograms, Pareto charts and scatter diagrams, to name a few--have served remarkably well.

It's a Mindset
Some might ask why SPC didn't last. It did. People began to understand SPC is more than a set of statistical tools. It is a way of taming manufacturing processes, and its applications extend to all sorts of processes beyond manufacturing. It is a means of reducing process variation, working for the betterment of us all. It is fundamental to good process management, and the thinking is fundamental to good management, period.

We stray occasionally as we get lured into fad systems, but then we come back to basics. As Michael Hammer13 said in harsh metaphor:
From Wal-Mart and cross-docking to Toyota and just-in-time, these companies know winning does not depend on a clever plan or a hot concept. It depends on how regular, mundane, basic work is carried out. If you can consistently do your work faster, cheaper and better than the other guy, then you get to wipe the floor with him--without any accounting tricks. Relentless operational innovation is the only way to establish a lasting advantage. And new ideas are popping up all over.
Operational innovation isn't glamorous. It doesn't make for amusing cocktail party conversation, and it's unlikely to turn up in the world of glam-business journalism. It's detailed and nerdy. This is old business, this is new business, this is real business. Get used to it.
SPC grew into everything we do. It changed the way we think, work and act, and it evolved into total quality management (TQM). But there is an ebb and flow to new technologies in our society, and TQM's star faded in the presence of reengineering, which faded with the advent of Six Sigma. All the while, the basic SPC tools have been refined and augmented, and SPC serves in muted presence to underpin the newer, expanded technologies.

REFERENCES
1. Walter A. Shewhart, Economic Control of Quality of Manufactured Product, Van Nostrand, 1931; republished by ASQ Quality Press, 1980.
2. Ibid.
3. Ibid.
4. E.R. Ott, comments in "Tributes to Walter A. Shewhart," Industrial Quality Control, August 1967.
5. E.L. Grant and R.S. Leavenworth, Statistical Quality Control, seventh edition, McGraw-Hill, 1996.
6. Statistical Quality Control Handbook, Western Electric, 1956.
7. W. Edwards Deming, Out of the Crisis, M.I.T. Center for Advanced Engineering Study, 1986.
8. J.M. Juran and A. Blanton Godfrey, Juran's Quality Handbook, fifth edition, McGraw-Hill, 1998.
9. George Box and Alberto Luceno, Statistical Control: By Monitoring and Feedback Adjustment, Wiley, 1997.
10. Robert Launer, Robustness in Statistics, Academic Press, 1979.
11. Lynne B. Hare, "Statistics Roundtable: Chicken Soup for Processes," Quality Progress, August 2001, p. 76.
12. Deming, Out of the Crisis, see reference 7.
13. Michael Hammer, "Forward to Basics," Fast Company, November 2002.


LYNNE B. HARE is associate director of technology guidance at Kraft Foods Research in East Hanover, NJ. He received a doctorate in statistics from Rutgers University, New Brunswick, NJ. Hare is a past chairman of ASQ's Statistics Division and an ASQ Fellow.



Cross-docking is a practice in logistics of unloading materials from an incoming semi-trailer truck or railroad car and loading them directly onto outbound trucks, trailers, or rail cars, with little or no storage in between. This may be done to change the type of conveyance, to sort material intended for different destinations, or to combine material from different origins into transport vehicles (or containers) with the same or similar destinations.
Cross-dock operations were pioneered in the US trucking industry in the 1930s and have been in continuous use in LTL (less-than-truckload) operations ever since. The US military began using cross-dock operations in the 1950s, and Wal-Mart brought cross-docking to the retail sector in the late 1980s.
In the LTL trucking industry, cross-docking is done by moving cargo from one transport vehicle directly into another, with minimal or no warehousing. In retail practice, cross-docking operations may use staging areas where inbound materials are sorted, consolidated, and stored until the outbound shipment is complete and ready to ship.
