"...if the true effect of what you are measuring is small,” said Andrew Gelman, a professor of statistics and political science at Columbia University, “then by necessity anything you discover is going to be an overestimate” of that effect.Sorry NYTimes for the length of the quote, but the example was too good to break up.
Consider the following experiment. Suppose there was reason to believe that a coin was slightly weighted toward heads. In a test, the coin comes up heads 527 times out of 1,000.
Is this significant evidence that the coin is weighted?
Classical analysis says yes. With a fair coin, the chances of getting 527 or more heads in 1,000 flips is less than 1 in 20, or 5 percent, the conventional cutoff. To put it another way: the experiment finds evidence of a weighted coin “with 95 percent confidence.”
Yet many statisticians do not buy it. One in 20 is the probability of getting any number of heads above 526 in 1,000 throws. That is, it is the sum of the probability of flipping 527, the probability of flipping 528, 529 and so on.
But the experiment did not find all of the numbers in that range; it found just one — 527. It is thus more accurate, these experts say, to calculate the probability of getting that one number — 527 — if the coin is weighted, and compare it with the probability of getting the same number if the coin is fair."

Sorry NYTimes for the length of the quote, but the example was too good to break up.
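For readers who want to check the arithmetic, here is a rough sketch of both calculations in Python using scipy.stats.binom. The 0.527 bias used for the "weighted coin" hypothesis is my own illustrative choice (simply the observed frequency); the article does not say what alternative the critics would compare against.

# Classical tail probability vs. the point-probability comparison from the quote.
from scipy.stats import binom

n, k = 1000, 527

# Classical analysis: chance of getting 527 OR MORE heads from a fair coin.
# binom.sf(k - 1, n, p) returns P(X >= k) for the binomial distribution.
p_value = binom.sf(k - 1, n, 0.5)
print(f"P(X >= {k} | fair coin) = {p_value:.4f}")   # roughly 0.047, just under the 5 percent cutoff

# The critics' comparison: probability of EXACTLY 527 heads under a fair coin
# versus under a (hypothetically) weighted coin.
p_fair = binom.pmf(k, n, 0.5)        # P(X = 527 | p = 0.500)
p_weighted = binom.pmf(k, n, 0.527)  # P(X = 527 | p = 0.527), assumed bias
print(f"P(X = {k} | p = 0.500) = {p_fair:.5f}")
print(f"P(X = {k} | p = 0.527) = {p_weighted:.5f}")
print(f"Likelihood ratio (weighted vs. fair): {p_weighted / p_fair:.1f}")

Run as written, the tail probability slips under the conventional 5 percent cutoff, while the head-to-head comparison of the single observed outcome favors the weighted coin by only about 4 to 1, which is the critics' point: much less decisive than "95 percent confidence" sounds.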