「華人戴明學院」 (the Chinese Deming Institute) is a learning community for the Deming philosophy, dedicated to the study, promotion, and application of the System of Profound Knowledge. The purpose of this blog is to advance the ideas and ideals of W. Edwards Deming.

Sunday, July 31, 2016

Bill Gates Views Good Data as Key to Global Health; error bars


Let’s talk about the Global Burden of Disease study. [The GBD is published on a nearly annual basis by the IHME.] Because this study is an independent and now—with the additional funding your foundation is providing—a regularly updated assessment of global health, it could in principle serve as a gold-standard reference of progress in various parts of the world on various diseases. Have you actually used it in that way to identify programs that are working and those that are not working and need redirection?
The GBD assembles data from lots of different field studies, many of which we are funding. For example, we ran a big study called GEMS [Global Enteric Multicenter Study] to try to figure out all the different causes of diarrheal disease—rotavirus is the biggest cause, but there’s also E. coli, shigella, cryptosporidium and others—and how important each one is. We still struggle with large uncertainty about the locations and extent of certain diseases, such as typhoid and cholera, which no country wants to admit they still have. My teams, like others who are very active in fieldwork, usually are looking at the primary papers as soon as they come out in the scientific literature. By the time the information gets aggregated and vetted and incorporated into the GBD database, it should no longer be surprising to us.
But GBD is super helpful when we’re talking to developing countries and saying “Look, here’s what is going on with tuberculosis in your country versus others like yours.” It’s a very important tool to educate people—like the World Development Report was for me. You can see the time progressions and zoom in on any country. It’s one of the better data-visualization sites on the entire Web. It’s super nice. And most people aren’t that up to date on these disease trends—particularly for infectious diseases. So I’ve been taking GBD charts with me when I’ve met with people in Cambodia or Indonesia or even at the French aid agency about trends in francophone Africa. They can reveal when we haven’t set the right priorities—so it’s a very important tool for me. Before I go into strategy meetings, I sometimes look at the GBD to remind myself of the numbers.
We now have enough detailed data to break big illness categories like diarrheal disease apart into separate diseases by the root cause. Even so, the error bars tend to be quite large on these estimates because, unlike in the rich world where disease cases are actually counted and tracked, in the poorer parts of the world we have to rely on sampling and extrapolation. If you happen to sample in places where the condition is unusually prevalent, the extrapolated numbers can be wrong.
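A toy simulation can illustrate the sampling pitfall described here: surveying only where a condition is unusually prevalent skews the extrapolated national figure. All numbers below are invented for illustration; none are real GBD data.

```python
import random

random.seed(1)

# Hypothetical country of 1,000 districts with widely varying true prevalence.
# (Illustrative numbers only, not real GBD data.)
districts = [random.betavariate(2, 50) for _ in range(1000)]
true_mean = sum(districts) / len(districts)

# Survey A: 30 districts sampled at random
random_sample = random.sample(districts, 30)
est_random = sum(random_sample) / len(random_sample)

# Survey B: 30 districts, but chosen where the disease is most visible
hotspot_sample = sorted(districts, reverse=True)[:30]
est_hotspot = sum(hotspot_sample) / len(hotspot_sample)

print(f"true national prevalence:       {true_mean:.4f}")
print(f"estimate from random sampling:  {est_random:.4f}")
print(f"estimate from hotspot sampling: {est_hotspot:.4f}")
```

Averaging a random sample tracks the national figure, while averaging only the most visible hotspots overstates it substantially; this is exactly the bias the wide error bars are meant to warn about.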
That raises an interesting point. One of the potential advantages of having this statistical inference machine that IHME uses to produce the GBD estimates is that you could identify where you would get the most bang for the buck if you did a new study that would improve the empirical input to the system. Has the GBD actually been used to prioritize funding of surveillance in this way?
Oh, yeah. Disease surveillance in the poor world is terrible. While it’s great that we now have this published set of numbers, they have pretty big error bars—and probably some of the error bars should be even bigger than they are shown in the IHME reports. But we’re actively looking at ways to improve the situation. New diagnostics are becoming available, for example, that can check for lots of different diseases by analyzing just a few drops of blood. So rather than running one study after another, each of which has to set up a bunch of different centers just to get data on one kind of disease, we might be able to use clinics that are running all the time and constantly monitoring the prevalence of lots of diseases simultaneously.




From Wikipedia, the free encyclopedia
bar chart with confidence intervals (shown as red lines)
Error bars are a graphical representation of the variability of data and are used on graphs to indicate the error, or uncertainty in a reported measurement. They give a general idea of how precise a measurement is, or conversely, how far from the reported value the true (error free) value might be. Error bars often represent one standard deviation of uncertainty, one standard error, or a certain confidence interval (e.g., a 95% interval). These quantities are not the same and so the measure selected should be stated explicitly in the graph or supporting text.
Error bars can be used to compare visually two quantities if various other conditions hold. This can determine whether differences are statistically significant. Error bars can also suggest goodness of fit of a given function, i.e., how well the function describes the data. Scientific papers in the experimental sciences are expected to include error bars on all graphs, though the practice differs somewhat between sciences, and each journal will have its own house style. It has also been shown that error bars can be used as a direct manipulation interface for controlling probabilistic algorithms for approximate computation.[1] Error bars can also be expressed in plus-minus notation (±), plus the upper limit of the error and minus the lower limit of the error.[2]
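To make the distinction between the three common choices concrete, here is a small Python sketch computing all three on one invented sample (the data are hypothetical, shown only to compare magnitudes):

```python
import math

# Hypothetical measurements (invented for illustration)
sample = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 10.3, 9.7]
n = len(sample)
mean = sum(sample) / n

# Sample standard deviation (n - 1 in the denominator)
var = sum((x - mean) ** 2 for x in sample) / (n - 1)
sd = math.sqrt(var)

# Standard error of the mean
se = sd / math.sqrt(n)

# Approximate 95% CI half-width (normal z = 1.96; a t-based
# interval would be somewhat wider for a sample this small)
ci_half = 1.96 * se

print(f"mean = {mean:.3f}")
print(f"error bar as 1 SD:     +/- {sd:.3f}")
print(f"error bar as 1 SE:     +/- {se:.3f}")
print(f"error bar as ~95% CI:  +/- {ci_half:.3f}")
```

For the same data the three error bars differ several-fold in length, which is why the text above insists that the measure chosen be stated explicitly in the graph or its caption.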


*****
Bill Gates has a well-established knack for sifting through complex data sets to find the right pathways for making progress around the globe in health, education and economic development.
In an interview with Scientific American the philanthropist talks about the statistics that inspire him most
SCIENTIFICAMERICAN.COM | by W. WAYT GIBBS

Wednesday, July 27, 2016

Operational definition: New puzzles after the passage of the Ill-gotten Party Assets Act (張喻閔)

Focus commentary: New puzzles after the passage of the Ill-gotten Party Assets Act (張喻閔)



The Legislative Yuan has passed, on its third reading, the Act Governing the Settlement of Ill-gotten Properties by Political Parties and Their Affiliate Organizations. Property acquired by a political party or its affiliate organizations since August 15, 1945, or delivered, transferred, or registered to a trustee since that date, is presumed to be ill-gotten party assets, after deducting party membership fees, political donations, and subsidies for campaign expenses.
The party assets are so vast in scope and so varied in function that it is hard to grasp their full extent. Once the Act takes effect, disputes such as the following may arise:
1. Is "ill-gotten" the same as unlawful?
The sources of ill-gotten party assets include confiscated Japanese property, gratuitous grants from the government, and party-run enterprises. There is a suspicion that they were acquired through the exercise of power, but does that alone make them "ill-gotten"? And which forms of "ill-gotten" acquisition should be targeted for recovery? That will be the main controversy ahead. Moreover, much of the German legislation dealing with East German party assets was already in motion before East and West Germany were unified. In other words, the gap between the collapse of the authoritarian regime and the start of the settlement process was short, whereas Taiwan, having gone through direct presidential elections and transfers of power between parties, has been a democracy for roughly two decades. Investigating and recovering assets after so many years will be far more complex, and the impact on society today deserves careful consideration.

Expropriation standards hard to determine

2. What is an "affiliate organization" under a party's "substantive control"?
The Act defines a "political party" as one registered under the Civil Associations Act of the Period of Mobilization for the Suppression of Communist Rebellion before martial law was lifted on July 15, 1987, and an "affiliate organization" as a juridical person, group, or institution whose personnel, finances, or business operations are under a party's "substantive control", or which was once under such control but was later transferred for less than adequate consideration. How, then, is "substantive control" to be determined? What percentage of shareholding, or how much power over personnel appointments, is the criterion for "substantive control"? This is likely to be an intractable question.
3. How are "lack of justifiable reason" and the "value" of ill-gotten assets to be determined?
The Act provides that property determined by the committee to be ill-gotten party assets, or property that a person acquired or obtained from a party, an affiliate organization, or their trustee gratuitously or for obviously inadequate consideration "without justifiable reason", must be transferred within a set period to the state, to a local self-governing body, or to the original owner. If the property has already been transferred and cannot be returned, its value is to be recovered instead, with the scope limited to the benefit still existing at the time of transfer.

Administrative litigation drags on

But how should "without justifiable reason" be defined? Take the Palau hotel, which has already been sold off: in 1998 the KMT's Central Investment Co. invested nearly NT$2 billion to build a grand hotel in Palau, sustaining diplomatic relations with that country. Does carrying out the mission of keeping "diplomatic friendship firm" count as a "justifiable reason"? And how should the "existing benefit" be calculated? By the standards used for public expropriation? These are all headache-inducing questions!
4. In the end, it still comes down to administrative litigation
The Act provides that a party dissatisfied with a disposition made by the committee after a hearing may file an administrative lawsuit within a peremptory period of two months after service of the written disposition. In other words, the KMT need only file an administrative suit to move the party-assets dispute back onto the battlefield of the courts. It will then again fall to the courts to construe the indeterminate legal concepts above, in litigation that drags on for years.
That the Act Governing the Settlement of Ill-gotten Properties by Political Parties and Their Affiliate Organizations has finally been enacted after years of effort is unquestionably progress for the times, and deserves recognition. But whether the ensuing legal disputes and social impact can be resolved properly is likely the real key to whether transitional justice is achieved.
PhD candidate, National Chengchi University

Wednesday, July 20, 2016

The corroding aluminum Apple Watch

First indication that galvanic corrosion is a potential problem ...

Many here are planning to get the aluminum Sport model and one of the bands from the stainless steel collection to wear to nicer occasions. However, all the non-Sport bands (except the Leather Loop) house stainless steel connectors, and we know that aluminum and stainless steel together pose a bi-metallic corrosion risk, also known as galvanic corrosion, especially in the presence of sweat (from working out or just from wearing the watch on a hot day).


A watch not even a year old corroding, and not as an isolated case, with a repair quote nearly equal to the price of a brand-new one: this is not the Apple image I identify with. I encourage friends with this problem to band together and take their complaint to Apple.
Writing since March 2010; I just want to write --…
HUNGSH-NTUCSIE.BLOGSPOT.COM | by SHIH-HAO HUNG

Tuesday, July 19, 2016

VW Emissions Cheating Ran Deep and Wide, State Alleges (wsj)

Deception went beyond ‘couple of software engineers,’ New York attorney general says

http://www.wsj.com/articles/vw-emissions-cheating-was-prolonged-widespread-new-n-y-l

Volkswagen AG’s emissions cheating spanned more than a decade and arose from deliberate efforts by dozens of employees to mislead regulators and consumers about diesel-powered vehicles, according to a lawsuit from New York’s top law-enforcement official.
The decision to use software to manipulate emissions tests traced back as far as 1999, when engineers at the company’s Audi luxury unit developed technology to quiet diesel vehicles, according to a lawsuit filed Tuesday by New York Attorney General Eric Schneiderman. The technology, rolled out in 2004, made vehicles exceed European emissions standards, so the engineers added software they called “acoustic function” to turn it off during emissions tests, the lawsuit said.
Volkswagen eventually developed the software, known as defeat devices, for diesel vehicles sold in the U.S. starting in 2008. The German auto maker in September admitted to U.S. environmental regulators that the vehicles used software that allowed them to pollute on the road up to 40 times the amount allowed. The Environmental Protection Agency’s subsequent disclosure of the cheating sparked probes across the globe and forced the resignation of Chief Executive Martin Winterkorn.
The latest lawsuit cites emails and other documents to allege a prolonged effort among dozens of Volkswagen employees in the U.S. and Germany to equip vehicles with the devices and stonewall inquiries from regulators. The deception went far beyond the “couple of software engineers” whom Michael Horn, then Volkswagen’s top U.S. executive, blamed in October, according to the lawsuit. Some managers, engineers and executives named in the suit haven’t been previously identified; others have been suspended or resigned since regulators disclosed the cheating.
In June, Volkswagen agreed to pay up to $15 billion to settle claims with environmental regulators, owners of 475,000 vehicles with two-liter diesel engines and some state authorities. The company still faces legal claims affecting more than 80,000 three-liter diesel vehicles and a U.S. Justice Department criminal probe. The software is on some 11 million vehicles world-wide, Volkswagen has said.
It is regrettable that some states have decided to sue for environmental claims now... 
—Volkswagen spokeswoman
Mr. Schneiderman’s suit seeks up to $450 million in civil penalties for what it calls Volkswagen’s “egregious and pervasive violations” that “strike at the heart” of state environmental laws, and were “the result of a willful and systematic scheme of cheating by dozens of employees at all levels of the company.” Massachusetts and Maryland filed similar lawsuits on Tuesday.
A Volkswagen spokeswoman called Tuesday’s allegations “essentially not new,” adding the company has been addressing them in discussions with U.S. and state authorities. “It is regrettable that some states have decided to sue for environmental claims now, notwithstanding their prior support of this ongoing federal-state collaborative process,” she said. Volkswagen declined interviews with those individuals mentioned in the New York suit.
In 2006, Volkswagen engineer James Liang developed a defeat device for a Jetta diesel car and eight years later conducted tests to help conceal emissions-cheating, according to the lawsuit. Oliver Schmidt, who headed Volkswagen’s U.S. regulatory compliance office in 2014 and early 2015, also played a key role in the deception, the lawsuit alleged.





Volkswagen has settled emissions claims with regulators and owners of about a half million diesel-powered vehicles. The settlement terms were announced Tuesday by environmental regulators. The WSJ's Lee Hawkins explains.
A 2007 Volkswagen memo recapping a meeting with California regulators said a state official “expects emission-control systems to work during conditions outside of the emissions tests. Volkswagen agrees,” according to the lawsuit. Instead, Volkswagen engineers and executives developed and implemented emissions-increasing defeat devices “as part of the normal course of business,” the lawsuit alleged.
In May 2014, a senior Volkswagen executive warned then-CEO Mr. Winterkorn of growing suspicions from regulators and that the company couldn’t explain dramatic emissions increases, the lawsuit said. A year later, a manager admonished a Volkswagen official in the U.S. for allowing another employee to send a frank email expressing concerns, it added.
The emissions-cheating scandal began in earnest in late 2006, when Volkswagen, facing engineering challenges, adapted technology Audi had developed to address other emissions problems and installed the defeat-device software on hundreds of thousands of Jetta, Golf and other cars, the lawsuit alleged. Cars with model years 2009 through 2015 ended up with the software.
In October 2006, several Volkswagen executives held a conference call with California regulators, who requested additional details on emissions-control devices.
Do we need to discuss next steps? Come up with the story please!
—VW executive’s email
Later that year, Leonard Kata, a Volkswagen emissions regulations and certification manager, emailed colleagues that government officials were interested in whether emissions-control devices were illegal defeat devices and detailed how agencies make such determinations, said the lawsuit.
Volkswagen executives in subsequent years discussed the development and use of defeat devices to dupe emissions tests, including a direct report to Mr. Winterkorn, the CEO; heads of Audi’s powertrain development; and other Volkswagen division heads and employees below them, the lawsuit alleged.
After learning in spring 2015 that California regulators planned to conduct further tests on vehicles, internal Volkswagen emails “began to reflect desperation and panic,” the lawsuit said. One engineering executive conveyed “serious concern regarding” what California’s tests “would expose,” the lawsuit alleged. “Do we need to discuss next steps?” the engineering executive wrote, according to the lawsuit. “Come up with the story please!”
A senior emissions-compliance manager in the U.S. for Volkswagen, meanwhile, expressed concern in a May 2015 email that requests for information from California regulators were “not a normal process and that we are concerned that there may be possible future problems/risks involved.”
The email added that Volkswagen’s software was being reviewed by top officials at California’s environmental agency, according to the lawsuit.

"Clean energy won’t save us – only a new economic system can do that"

"What would we do with 100% clean energy? Exactly what we are doing with fossil fuels: raze more forests, build more meat farms, expand industrial agriculture, produce more cement, and fill more landfill sites, all of which will pump deadly amounts of greenhouse gas into the air. We will do these things because our economic system demands endless compound growth, and for some reason we have not thought to question this."
It’s time to pour our creative energies into imagining a new global economy. Infinite growth is a dangerous illusion
THEGUARDIAN.COM | by JASON HICKEL

Sunday, July 17, 2016

3,000 studies using brain scans could simply be wrong; French Interior Ministry: "The SAIP app sent the alert about the July 14 Nice terror attack too late"

Software glitches blow a hole in a lot of neuroscience
Two studies, one on neuroscience and one on palaeoclimatology, cast doubt on established results. First, neuroscience and the reliability of brain scanning
ECON.ST



(Deutsche Welle Chinese) Ahead of this year's European Championship, the French Interior Ministry launched a mobile app called SAIP, intended to give timely warnings to people in the vicinity of a terrorist attack.
Yet after the terrorist attack in Nice on Thursday night (July 14), the app did not send its first message until 1:34 a.m. local time on the 15th, nearly three hours late. On the night of the French national holiday, in the southern seaside city of Nice, a 31-year-old French citizen of Tunisian origin drove a heavy truck into the crowd watching the fireworks, killing at least 84 people and injuring more than a hundred.
In a statement, the French Interior Ministry said: "The SAIP application sent the message about the July 14 terrorist attack in Nice too late." The app's designers held an emergency meeting on Friday afternoon. "Those responsible took action immediately to ensure that such a failure does not happen again."
Les Échos, citing a government source, reported that the message prepared by the local authorities should have gone out at 23:15, but the app, built by the French company Deveryware, failed to do its job in time because of a technical fault.

The Affordable Care Act, Journal of the American Medical Association; trial registration is indispensable



A famous Columbia graduate just published an article in The Journal of the American Medical Association.



Paper in a Top Medical Journal Has an Unexpected Author
The JAMA paper highlights some of the successes of the Affordable Care Act
SCIENTIFICAMERICAN.COM | by RACHAEL RETTNER, LIVESCIENCE






By Rachael Rettner, LiveScience, on July 14, 2016. Credit: Official White House Photo by Lawrence Jackson


In an unusual move for a sitting president, Barack Obama has published a scholarly paper in a scientific journal.

The paper, which discusses the success and future of the Affordable Care Act (ACA), was published Monday (July 11) in the prestigious medical journal JAMA.

It may be the first time a sitting president has authored a complete academic article — with an abstract, findings and conclusions — that's been published in a scientific journal, at least in recent history. However, several other presidents have written commentaries or opinion pieces that have been published in scientific journals during their presidency, including George W. Bush, who wrote about access to health care in a paper published in JAMA in 2004, and Bill Clinton, who wrote a commentary published in the journal Science in 1997.

Obama's journal article analyzes data gathered from other reports and studies, and highlights some of the successes of the ACA, including a drop in the percentage of Americans who do not have health insurance. After the act became law, the uninsured rate dropped by 43 percent, from 16 percent of Americans in 2010 to 9.1 percent in 2015, the paper says.

Still, Obama said, the country continues to face challenges on the way to improving its health care system. "Despite this progress, too many Americans still strain to pay for their physician visits and prescriptions, cover their deductibles, or pay their monthly insurance bills; struggle to navigate a complex, sometimes bewildering system; and remain uninsured," Obama wrote.

To make sure Americans have enough insurance options and to keep insurance costs low, Obama encouraged Congress to revisit the "public option" plan, meaning a government-run insurance plan that would compete in the insurance marketplace alongside private plans. This public option could be available in parts of the country where insurance options are limited, he said.

Obama also recommended policies that could help reduce the cost of prescription drugs, including those that "give the federal government the authority to negotiate prices for certain high-priced drugs."

Obama's article was not peer-reviewed, but it went through several rounds of editing and fact-checking, according to Bloomberg.

Copyright 2016 LiveScience, a Purch company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.







Trial registration is indispensable (2012), by Tim Harford, Financial Times columnist
I have just rolled a six-sided die a few times, and the results were: 6, 5, 5, 6, 5. My question: do you think this die is biased? One way to think about it is to consider the probability of rolling nothing but 5s and 6s by pure chance. It is small: one in 243. But before you accuse me of loading the die, let me mention something I forgot to say earlier: besides the 5s and 6s, I also rolled one 4, three 3s, two 2s, and one 1. I was not interested in those results, so I did not tell you about them. If you feel that the omitted results also matter, then you are beginning to appreciate the importance of the "trial register", nerdy as the term sounds.
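The dice arithmetic above can be checked directly: the chance that five fair rolls all land on 5 or 6 is (2/6)^5 = 1/243. A quick sketch, both in closed form and by enumerating all 6^5 equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

# Closed form: probability that five fair six-sided rolls all land on 5 or 6
p = Fraction(2, 6) ** 5
print(p)  # 1/243

# Brute force over all 6**5 equally likely outcomes
favourable = sum(1 for rolls in product(range(1, 7), repeat=5)
                 if all(r >= 5 for r in rolls))
print(Fraction(favourable, 6 ** 5))  # 1/243
```

Both routes give exactly 1/243, about 0.4 percent: unlikely, until you learn about the rolls that were left out.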


Every day, researchers around the world conduct randomized controlled trials (RCTs). Some of them pursue the truth with meticulous, tireless diligence; others cut corners for the sake of a career-making paper or a licence for a new drug formulation. But even if the results of each individual trial are beyond reproach, the trials are worthless if there is a systematic bias toward one particular kind of result.


I reported dice rolls biased toward large numbers. You might uncharitably suspect that in industry-sponsored clinical drug trials, the trials that show a drug working well are more likely to be made public. You would be right: Ben Goldacre sums up the phenomenon incisively in his new book Bad Pharma. Trials that end in confusion or disaster can go "missing": they may not be worth an academic paper, but the results must still be recorded. Trials whose results are too dull for the researchers to write a publishable paper can go "missing" in the same way.


As my dice-rolling experiment shows, it is hard to grasp the whole picture without seeing every trial. There is probably only one way to achieve that: compulsory registration of trials. Researchers who run a trial and then throw away or fail to publish the results should be held in contempt. Anyone conducting a systematic review of a particular field could then consult the register and track down the unpublished trials.


A few years ago, the International Committee of Medical Journal Editors announced that the prestigious journals under its control would no longer publish research based on clinical trials that had not been registered in advance. The effect was striking: registration of new trials rose markedly. Unfortunately, as Sylvain Mathieu and colleagues explained in the Journal of the American Medical Association in 2009, more than half of the studies they examined had ignored the rule and were published all the same. The threat of "no registration, no publication" appears to be hollow.


What, then, of economics, which has acquired quite a taste for randomized trials in recent years? The good news is that the American Economic Association is about to create a registry for economics trials, to come into operation next year. Registration will be voluntary for now, but two leading economists, Esther Duflo of MIT and Dean Karlan of Yale, told me they are hopeful that a strong social norm in favour of trial registration will take hold.


Trial registries pose a particular challenge for the social sciences. Randomized controlled trials in medicine are designed to test the effect of a specific treatment; in the social sciences they are more likely to be used in the search for interesting hypotheses. Social-science RCTs are often run as collaborations between academics and practitioner organizations, and the trials may change over time, something clinical trials do not do.


That complicates both registering a trial and amending the registration as circumstances change. It makes a registry harder to administer, but it also makes one all the more indispensable.


The author presents the BBC Radio 4 programme More or Less. Translated by 劉鑫.

Tuesday, July 12, 2016

Dave Kerridge ( -2013), our mentor

Facebook says I "became friends with Dave Kerridge on July 13, 2009." In fact, we had each merely signed up there; neither of us ever posted.
In the spring of 2013 Dave Kerridge passed away, and I wrote a piece in English, "Remembering Dave Kerridge"; on October 21 of that year I jotted down our friendship in Chinese: "Dave Kerridge ( -2013), our mentor".


In memory of David Kerridge
I read a news item today:
On Oct. 31, Q2 Music celebrates new music's favorite holiday, Halloween, with Q2's first 24-hour scarathon of hair-raising microtones, densely clustered choruses and heart-pumping slasher film suites: http://bit.ly/16kuHGv
I looked up the root of one word and wrote it into my blog 英文人行道 (English Sidewalk).
In fact, Professor David Kerridge in the UK had taught me that root by email more than a decade ago.
He passed away this year. I wrote a piece of thanks in English for his family.
We never met in person.

-athon

Entry from World dictionary

suffix

  • forming nouns denoting an action or activity which is carried on for a very long time or on a very large scale, typically to raise funds for charity: talkathon, walkathon

Origin:

on the pattern of (mar)athon





2013.5.13
Thank you for preparing this piece of David's story for us. The Funeral service will be on Monday 20th May at Kings College Chapel, Aberdeen University. Anyone who is able to attend is welcome.

If we could have your story of how you knew David and the input he had on your life before then, we would be grateful.

Thank You



Deborah Armstrong



Morning of 2013.5.13
I learned the sad news of Professor David Kerridge's death from my French friend Jean-Marie Gogue (he died on May 9; the memorial service is on the 20th).
Dear Hanching,

It is with great sadness that I have learnt the death of David Kerridge.

Best wishes
Jean-Marie


From: "JOYCE Orsini [Staff/Faculty [Business]]"
Subject: Sad News about David Kerridge
Date: 12 May 2013 16:41:56 UTC+02:00
To: Jean-Marie Gogue

Dear Jean-Marie,
I don't know if you knew David Kerridge.  If so, you may wish to know that he passed away on Thursday.  I received a message from his daughter to that effect.
She said:  "We are collating stories about Dad for his funeral service on Monday 20th May. If you have any you would like to share please email them to me dalife@hotmail.co.uk"
JO

I sent him a brief reply:

Dear J-M,
Thanks for this information.
It is really sad news.
I wish to see a collection of essays by him and his daughter published.
At the same time I wrote to Jo in the US, saying that I would write about how David had answered our questions and helped us over the past 17 years.

Dear Jo,
I received Dave's sad news from J-M.
I'll write the story of how he helped me understand Deming's and Shewhart's philosophy, and share his wisdom with readers in Taiwan.
Please kindly tell me the deadline of my story.



Deming Papers
www.fr-deming.org, 1 Nov 1980 [cached]
David Kerridge, former professor at the University of Aberdeen, was a leader of the British Deming Association. He assisted at Deming's four-day seminars for many years, and lectured with him in a series of two-day seminars.




Dave and Sarah Kerridge, "Aristotle's Mistake or the Curious Incident of the Dog in the Night-Time"

Posing the right questions is more difficult than getting answers. Only the right question leads to the right answer. This short paper shows that by nature man is inclined to ask the wrong questions. A conscious effort is therefore required to look at things from a different viewpoint.


Papers 2 & 3 on Resistance to Change by David Kerridge

Paper 2

I want to suggest a theory about why there should be such resistance, and how it relates to our problems of spreading the Deming Philosophy.
I believe that the three approaches to statistical inference do not come just from differences about statistics. They correspond to three different views of what *science* is about. What follows is not an exact description of what philosophers say (or said) that scientists should do, but based on my own experience in using all three approaches, and observing what other scientists do.
  1. Logical/Mathematical
    • Science is concerned with logical proof. It therefore requires an all or nothing view - a theory is true or false, and must be accepted or rejected.
  2. Explanatory
    • Science is concerned with explanation - the reasons why things happen. Approximate models, like representing atoms by billiard balls, are useful, if they make the explanation easier to understand.
  3. Predictive
    • Science is prediction - no more, no less. Prediction must be based on observation, and observation defined in terms of action.
These three views of science correspond to different stages of development. The logical/mathematical view was unquestioned throughout the middle ages, and reached its peak with Descartes. Scientific truths were deduced by strict logic, starting with self-evident axioms, as in Euclidean geometry.
From the time of Isaac Newton, science changed. I believe (I have not checked the originals) that Newton presented his ideas in the old format, as deductions from axioms. So successful was he that some later writers claimed that Newton's physics is true, not because of observed facts, but by pure logic. But for most people, what Newton did was to provide an explanation - the force of gravity.
Explanation need not be exact. As George Box has put it: "All models are wrong. Some are useful."
At the beginning of the 20th century, both forms of science fell apart. There were two blows to previous thinking. First of all, many "self-evident" ideas turned out to be false. An example is Einstein's demonstration that time is relative. Secondly, the idea of explanation itself was called into question.

Quantum Theory, in particular, provides no explanation we can understand. But it predicts strange and unbelievable outcomes, and predicts them with amazing accuracy.
Most people are unaware that science has changed. Only those trained in theoretical physics (like Shewhart and Deming) have adopted the new philosophy of science. Others are stuck in the thinking of earlier centuries. And because knowledge is now so specialised and compartmentalised, few scientists are aware that different ideas are taken for granted in fields other than their own. We are dealing, in most cases, with unconscious assumptions, rather than conscious beliefs. That makes them far harder to deal with. It seems that many people cannot face a challenge to what they think is "obvious" - though the System of Profound Knowledge is one challenge after another.
My theory about the three approaches to statistics is as follows.
  1. Neyman and Pearson saw science in the logical/deductive mode, which is still common among mathematicians.
  2. R A Fisher had extensive experience of biological science. He became, in fact, head of the department of genetics at Cambridge. Like most scientists, he saw science as explanation.
  3. Walter A Shewhart and W Edwards Deming saw science in terms of the new physics of prediction and action.
I started with statistics because the historical record is so striking. But the other examples are also well documented. Semmelweis demonstrated that hygiene saved lives. But nothing was done until Pasteur explained the reason for it.
I apologise for what may seem to be lengthy theoretical rambling. But the strange thing about the Deming philosophy is that the most abstract ideas turn out to have direct practical applications. It is not surprising that science based on prediction and action is exactly what we need for management.
In my next instalment I hope to show that this helps us understand some of the difficulties we face.
-----------------------------------------------------------------------------------------------

Paper 3

In the previous post I mentioned conflicting models of science, based on either:
  1. Logical deduction
  2. Models and explanation
  3. Prediction
The idea of different models of science may seem remote from practical application. But as I expect we have all found for ourselves, in the Deming Philosophy nothing is "too theoretical" to affect our actions.
Walter A Shewhart certainly saw this. In his 1931 book, on "The Economic Control of Quality of Manufactured Product" he wrote:
"Progress in modifying our concept of control has been and will be comparatively slow. In the first place, it requires the application of certain modern physical concepts......"
He does not say which concepts. But we can reasonably relate this to the "Theory of Knowledge" dimension of the System of Profound Knowledge.

It seems to me - and this is purely a personal reflection which others may dispute - that the "Theory of Knowledge" aspect is the one that makes least impact. It may easily sound as if it contains nothing new. After all, most scientists, if asked, would say something similar. They would probably quote Karl Popper, who popularised this view of science, rather than C I Lewis or A N Whitehead, but the message is the same.
The difference is - again a personal opinion - that most people, whether scientists or laymen, pay lip-service to Popper, but continue to think in earlier modes. Most people see the whole point of science as explanation.
I have just been watching a television series that attempts to explain the ideas of "String Theory" for the layman. It showed one group of scientists arguing that "strings" may provide the "theory of everything" that unites Quantum Theory and Relativity, which are at present in conflict. Other scientists say "Strings are not a scientific theory" because they make no testable predictions. I can almost hear Shewhart laughing.
But "see" is the key word here. What we see determines how we act. WED describes the System of Profound Knowledge as a "lens". In other words, it enables us to bring some things into focus, and to see what we could not otherwise see.
What a pure scientist sees may change the whole world in the long run. What a manager sees changes everything now. To quote WED's own words:
"My job is not to tell managers what to do. It is to help them to see things that they could not otherwise be expected to see."
We have all seen how managers react to the Red Bead Experiment. The idea that it is wrong to look for an explanation of the red beads produced by a worker is profoundly shocking. Explanation is the lens through which they see the world. It is very hard to switch to thinking in terms of prediction.

This creates resistance to what Deming and Shewhart say about SPC, systems, and even psychology. The problem is all the greater because it is unconscious.






+++++

This is an expanded version of an article that Balestracci wrote for Quality Digest in December 2007.
--Editor
I discovered a wonderful unpublished paper by David and Sarah Kerridge several years ago. Its influence on my thinking has been nothing short of profound. As statistical methods get more and more embedded in everyday organizational quality improvement, I feel that now is the time to get us "back to basics"—but a set of basics that is woefully misunderstood, if taught at all. Professor Kerridge is an academic at the University of Aberdeen in Scotland, and I consider him one of the leading Deming thinkers in the world today.


Deming distinguished between two types of statistical study, which he called "enumerative" and "analytic." The key connection for quality improvement is about the way that statistics relates to reality and lays the foundation for a theory of using statistics.
Because everyday processes are usually not static "populations," the question becomes, "What other knowledge, beyond probability theory, is needed to form a basis for action in the real world?" The perspective from which virtually all college courses are taught—population based—invalidates many of its techniques in a work environment, as opposed to a strictly research environment.
To translate to medicine, there are three kinds of statistics:
  • Descriptive. What can I say about this specific patient?
  • Enumerative. What can I say about this specific group of patients?
  • Analytic. What can I say about the process that produced this specific group of patients and its results?
Let's suppose there is a claim that, as a result of a new infection-control policy, acquired-MRSA (methicillin-resistant Staphylococcus aureus, a strain of staph that is resistant to the broad-spectrum antibiotics commonly used to treat infections) in a particular hospital has been reduced by 27 percent—a result that would be extremely desirable if that kind of reduction could be produced in other hospitals, or in public health communities, by using the same methods. However, there are a great many questions to ask before we can act, even if the action is to design an experiment to find out more.
Counting the number of infections in different years is an enumerative problem (defining "acquired infection" and counting them for this specific hospital). Interpreting the change is an analytic problem.
Could the 27-percent reduction be due to chance? If we imagine a set of constant conditions, which would lead, on average, to 100 infections, we can, on the simplest mathematical model (Poisson counts), expect the number we actually see to be anything between 70 and 130. If there were 130 infections one year, and 70 infections the next year, people would think that there had been a great improvement—but this could be just chance. This is the least of our problems.
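The 70-to-130 range quoted above is just the Poisson mean plus or minus three standard deviations, where the standard deviation of a Poisson count is the square root of its mean. A short check of that arithmetic; the exact tail sum is my own addition, not part of the article:

```python
import math

# The article's range: a constant process averaging 100 infections a year
# should produce yearly counts within roughly mean +/- 3 * sqrt(mean).
mean = 100
sigma = math.sqrt(mean)              # sqrt(100) = 10 for a Poisson count
low, high = mean - 3 * sigma, mean + 3 * sigma
print(low, high)                     # 70.0 130.0

def poisson_pmf(k, lam):
    """Probability of observing exactly k events under Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Exact probability that a Poisson(100) count falls inside [70, 130]
p_inside = sum(poisson_pmf(k, mean) for k in range(70, 131))
print(f"P(70 <= count <= 130) = {p_inside:.4f}")   # comfortably above 0.99
```

So a drop from 130 to 70 infections, a 46 percent "improvement," is entirely consistent with an unchanged process, which is exactly the article's warning.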
Some of the infections may be related, as in a temporary outbreak or pandemic. If so, the model is wrong, because it assumes that infections are independent; or the methods of counting might have changed from one year to the next (Are you counting all suspicious infections, or only confirmed cases?). Without knowing about such things we cannot predict from these figures what will happen next year. So if we want to draw the conclusion that the 27-percent reduction is a "real" one, that is, one which will continue in the future, we must use knowledge about the problem that is not given by those figures alone.
Even less can we predict accurately what would happen in a different hospital, or a different country. The causes of infection, or the effect of a change in infection control methods, may be completely different.
So this is the context of the distinction between enumerative and analytic uses of statistics. Some things can be determined by calculation alone, others require the use of judgment or knowledge of the subject, others are almost unknowable. Luckily, your actions to get more information inherently improve the situation, because when you understand the sources of uncertainty, you understand how to reduce it.
Most mathematical statisticians state statistical problems in terms of repeated sampling from the same population. This leads to a very simple mathematical theory, but does not relate to the real needs of the statistical user. You cannot take repeated samples from the exact same population, except in rare cases. It's a different kind of problem—sampling from an imaginary population.
