Odds are, it's wrong: Science fails to face the shortcomings of statistics

For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science's fidelity to fact and conferred a timeless reliability to its findings.

During the past century, though, a mutant form of math has deflected science's heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.

It's science's dirtiest secret: The "scientific method" of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.

Replicating a result helps establish its validity more securely, but the common tactic of combining numerous studies into one analysis, while sound in principle, is seldom conducted properly in practice.

Experts in the math of probability and statistics are well aware of these problems and have for decades expressed concern about them in major journals. Over the years, hundreds of published papers have warned that science's love affair with statistics has spawned countless illegitimate findings. In fact, if you believe what you read in the scientific literature, you shouldn't believe what you read in the scientific literature.

"There is increasing concern," declared epidemiologist John Ioannidis in a highly cited 2005 paper in PLoS Medicine, "that in modern research, false findings may be the majority or even the vast majority of published research claims."

Ioannidis claimed to prove that more than half of published findings are false, but his analysis came under fire for statistical shortcomings of its own. "It may be true, but he didn't prove it," says biostatistician Steven Goodman of the Johns Hopkins University School of Public Health. On the other hand, says Goodman, the basic message stands. "There are more false claims made in the medical literature than anybody appreciates," he says. "There's no question about that."

Nobody contends that all of science is wrong, or that it hasn't compiled an impressive array of truths about the natural world. Still, any single scientific study alone is quite likely to be incorrect, thanks largely to the fact that the standard statistical system for drawing conclusions is, in essence, illogical. "A lot of scientists don't understand statistics," says Goodman. "And they don't understand statistics because the statistics don't make sense."

Statistical insignificance

Nowhere are the problems with statistics more blatant than in studies of genetic influences on disease.
In 2007, for instance, researchers combing the medical literature found numerous studies linking a total of 85 genetic variants in 70 different genes to acute coronary syndrome, a cluster of heart problems. When the researchers compared genetic tests of 811 patients who had the syndrome with a group of 650 people (matched for sex and age) who didn't, only one of the suspect gene variants turned up substantially more often in those with the syndrome, a number to be expected by chance. "Our null results provide no support for the hypothesis that any of the 85 genetic variants tested is a susceptibility factor" for the syndrome, the researchers reported in the Journal of the American Medical Association.

How could so many studies be wrong? Because their conclusions relied on "statistical significance," a concept at the heart of the mathematical analysis of modern scientific experiments.

Statistical significance is a phrase that every science graduate student learns, but few comprehend. While its origins stretch back at least to the 19th century, the modern notion was pioneered by the mathematician Ronald A. Fisher in the 1920s. His original interest was agriculture. He sought a test of whether variation in crop yields was due to some specific intervention (say, fertilizer) or merely reflected random factors beyond experimental control.

Fisher first assumed that fertilizer caused no difference, the "no effect" or "null" hypothesis. He then calculated a number called the P value, the probability that an observed yield in a fertilized field would occur if fertilizer had no real effect. If P is less than .05, meaning the chance of a fluke is less than 5 percent, the result should be declared "statistically significant," Fisher arbitrarily declared, and the no-effect hypothesis should be rejected, supposedly confirming that fertilizer works.

Fisher's P value eventually became the ultimate arbiter of credibility for science results of all sorts, whether testing the health effects of pollutants, the curative powers of new drugs or the effect of genes on behavior. In various forms, testing for statistical significance pervades most of scientific and medical research to this day.

But in fact, there's no logical basis for using a P value from a single study to draw any conclusion. If the chance of a fluke is less than 5 percent, two possible conclusions remain: There is a real effect, or the result is an improbable fluke. Fisher's method offers no way to know which is which. On the other hand, if a study finds no statistically significant effect, that doesn't prove anything, either. Perhaps the effect doesn't exist, or maybe the statistical test wasn't powerful enough to detect a small but real effect.

"That test itself is neither necessary nor sufficient for proving a scientific result," asserts Stephen Ziliak, an economic historian at Roosevelt University in Chicago.

Soon after Fisher established his system of statistical significance, it was attacked by other mathematicians, notably Egon Pearson and Jerzy Neyman. Rather than testing a null hypothesis, they argued, it made more sense to test competing hypotheses against one another. That approach also produces a P value, which is used to gauge the likelihood of a "false positive," concluding an effect is real when it actually isn't. What eventually emerged was a hybrid mix of the mutually inconsistent Fisher and Neyman-Pearson approaches, which has rendered interpretations of standard statistics muddled at best and simply erroneous at worst.
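The arithmetic behind Fisher's recipe is itself simple. Below is a minimal sketch in Python of a Fisher-style significance test on crop yields; the yield numbers are invented for illustration, and the permutation test used here is one modern way to carry out the calculation, not Fisher's exact procedure.

# A minimal sketch of Fisher-style significance testing on hypothetical
# fertilizer data.  The yields and the permutation-test approach are
# illustrative assumptions, not the actual analysis Fisher performed.
import random
from statistics import mean

fertilized   = [29.8, 31.2, 30.5, 32.0, 31.7, 30.9]   # hypothetical plot yields
unfertilized = [28.9, 30.1, 29.5, 30.8, 29.9, 30.2]

observed_diff = mean(fertilized) - mean(unfertilized)

# Null hypothesis: fertilizer makes no difference, so plot labels are
# interchangeable.  Shuffle the labels many times and count how often a
# difference at least as large arises by chance alone.
pooled = fertilized + unfertilized
n = len(fertilized)
trials = 100_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n]) - mean(pooled[n:])
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed_diff:.2f}")
print(f"P value: {p_value:.3f}")   # "statistically significant" by Fisher's rule if below .05

The calculation is mechanical; the trouble lies in what the resulting number is taken to mean.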
As a result, most scientists are confused about the meaning of a P value or how to interpret it. "It's almost never, ever, ever stated correctly, what it means," says Goodman.

Correctly phrased, experimental data yielding a P value of .05 means that there is only a 5 percent chance of obtaining the observed (or more extreme) result if no real effect exists (that is, if the no-difference hypothesis is correct). But many explanations mangle the subtleties in that definition. A recent popular book on issues involving science, for example, states a commonly held misperception about the meaning of statistical significance at the .05 level: "This means that it is 95 percent certain that the observed difference between groups, or sets of samples, is real and could not have arisen by chance."

That interpretation commits an egregious logical error (technical term: "transposed conditional"): confusing the odds of getting a result (if a hypothesis is true) with the odds favoring the hypothesis if you observe that result. A well-fed dog may seldom bark, but observing the rare bark does not imply that the dog is hungry. A dog may bark 5 percent of the time even if it is well-fed all of the time.

Another common error equates statistical significance to "significance" in the ordinary use of the word. Because of the way statistical formulas work, a study with a very large sample can detect "statistical significance" for a small effect that is meaningless in practical terms. A new drug may be statistically better than an old drug, but for every thousand people you treat you might get just one or two additional cures, a difference that is not clinically significant. Similarly, when studies claim that a chemical causes a "significantly increased risk of cancer," they often mean that it is just statistically significant, possibly posing only a tiny absolute increase in risk.

Statisticians perpetually caution against mistaking statistical significance for practical importance, but scientific papers commit that error often. Ziliak studied journals from various fields, psychology, medicine and economics among others, and reported frequent disregard for the distinction. "I found that eight or nine of every 10 articles published in the leading journals make the fatal substitution" of equating statistical significance to importance, he said in an interview. Ziliak's data are documented in the 2008 book The Cult of Statistical Significance, coauthored with Deirdre McCloskey of the University of Illinois at Chicago.

Multiplicity of mistakes

Even when "significance" is properly defined and P values are carefully calculated, statistical inference is plagued by many other problems. Chief among them is the "multiplicity" issue: the testing of many hypotheses simultaneously. When several drugs are tested at once, or a single drug is tested on several groups, chances of getting a statistically significant but false result rise rapidly.

Experiments on altered gene activity in diseases may test 20,000 genes at once, for instance. Using a P value of .05, such studies could find 1,000 genes that appear to differ even if none are actually involved in the disease. Setting a higher threshold of statistical significance will eliminate some of those flukes, but only at the cost of eliminating truly changed genes from the list.
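A quick simulation makes that 1,000-gene figure concrete. The sketch below relies on the standard simplifying assumption that when a gene truly has no effect, its P value is uniformly distributed between 0 and 1, so a .05 cutoff flags roughly 5 percent of perfectly innocent genes.

# Multiplicity in miniature: 20,000 tests, no real effects anywhere,
# yet a .05 cutoff still declares about 1,000 of them "significant."
# (Uniform null P values are an idealizing assumption for illustration.)
import random

n_genes = 20_000
alpha = 0.05

p_values = [random.random() for _ in range(n_genes)]   # every gene is truly null
false_hits = sum(p < alpha for p in p_values)

print(f"{false_hits} of {n_genes} genes look 'significant' by chance alone")
# Typically a number near 1,000: flukes disguised as discoveries.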
In metabolic diseases such as diabetes, for example, many genes truly differ in activity, but the changes are so small that statistical tests will dismiss most as mere fluctuations. Of hundreds of genes that misbehave, standard stats might identify only one or two. Altering the threshold to nab 80 percent of the true culprits might produce a list of 13,000 genes, of which over 12,000 are actually innocent.

Recognizing these problems, some researchers now calculate a "false discovery rate" to warn of flukes disguised as real effects. And genetics researchers have begun using "genome-wide association studies" that attempt to ameliorate the multiplicity issue (SN: 6/21/08, p. 20).

Many researchers now also commonly report results with confidence intervals, similar to the margins of error reported in opinion polls. Such intervals, usually given as a range that should include the actual value with 95 percent confidence, do convey a better sense of how precise a finding is. But the 95 percent confidence calculation is based on the same math as the .05 P value and so still shares some of its problems.

Clinical trials and errors

Statistical problems also afflict the "gold standard" for medical research, the randomized, controlled clinical trials that test drugs for their ability to cure or their power to harm. Such trials assign patients at random to receive either the substance being tested or a placebo, typically a sugar pill; random selection supposedly guarantees that patients' personal characteristics won't bias the choice of who gets the actual treatment. But in practice, selection biases may still occur, Vance Berger and Sherri Weinstein noted in 2004 in Controlled Clinical Trials. "Some of the benefits ascribed to randomization, for example that it eliminates all selection bias, can better be described as fantasy than reality," they wrote.

Randomization also should ensure that unknown differences among individuals are mixed in roughly the same proportions in the groups being tested. But statistics do not guarantee an equal distribution any more than they prohibit 10 heads in a row when flipping a penny. With thousands of clinical trials in progress, some will not be well randomized. And DNA differs at more than a million spots in the human genetic catalog, so even in a single trial differences may not be evenly mixed. In a sufficiently large trial, unrandomized factors may balance out, if some have positive effects and some are negative.

Still, trial results are reported as averages that may obscure individual differences, masking beneficial or harmful effects and possibly leading to approval of drugs that are deadly for some and denial of effective treatment to others. "Determining the best treatment for a particular patient is fundamentally different from determining which treatment is best on average," physicians David Kent and Rodney Hayward wrote in American Scientist in 2007. "Reporting a single number gives the misleading impression that the treatment effect is a property of the drug rather than of the interaction between the drug and the complex risk-benefit profile of a particular group of patients."
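A toy calculation, with numbers invented purely for illustration, shows how the single average Kent and Hayward warn about can hide exactly this kind of interaction between a drug and different groups of patients.

# Illustrative only: a hypothetical trial in which most patients gain a
# little from a drug while a minority is harmed substantially, yet the
# reported average effect still looks favorable.
low_risk_patients  = 900    # hypothetical: the drug helps these slightly
high_risk_patients = 100    # hypothetical: the drug harms these badly

benefit_per_low_risk = +0.5    # arbitrary units of improvement
harm_per_high_risk   = -3.0

total_effect = (low_risk_patients * benefit_per_low_risk
                + high_risk_patients * harm_per_high_risk)
average_effect = total_effect / (low_risk_patients + high_risk_patients)

print(f"average effect per patient: {average_effect:+.2f}")   # +0.15, looks mildly beneficial
print("yet one patient in ten is made worse off by the treatment")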
Another concern is the common strategy of combining results from many trials into a single "meta-analysis," a study of studies. In a single trial with relatively few participants, statistical tests may not detect small but real and possibly important effects. In principle, combining smaller studies to create a larger sample would allow the tests to detect such small effects. But statistical techniques for doing so are valid only if certain criteria are met. For one thing, all the studies conducted on the drug must be included, published and unpublished. And all the studies should have been performed in a similar way, using the same protocols, definitions, types of patients and doses. When combining studies with differences, it is necessary first to show that those differences would not affect the analysis, Goodman notes, but that seldom happens. "That's not a formal part of most meta-analyses," he says.

Meta-analyses have produced many controversial conclusions. Common claims that antidepressants work no better than placebos, for example, are based on meta-analyses that do not conform to the criteria that would confer validity. Similar problems afflicted a 2007 meta-analysis, published in the New England Journal of Medicine, that attributed increased heart attack risk to the diabetes drug Avandia. Raw data from the combined trials showed that only 55 people in 10,000 had heart attacks when using Avandia, compared with 59 people per 10,000 in comparison groups. But after a series of statistical manipulations, Avandia appeared to confer an increased risk.

In principle, a proper statistical analysis can suggest an actual risk even though the raw numbers show a benefit. But in this case the criteria justifying such statistical manipulations were not met. In some of the trials, Avandia was given along with other drugs. Sometimes the non-Avandia group got placebo pills, while in other trials that group received another drug. And there were no common definitions. "Across the trials, there was no standard method for identifying or validating outcomes; events ... may have been missed or misclassified," Bruce Psaty and Curt Furberg wrote in an editorial accompanying the New England Journal report. "A few events either way might have changed the findings."

More recently, epidemiologist Charles Hennekens and biostatistician David DeMets have pointed out that combining small studies in a meta-analysis is not a good substitute for a single trial sufficiently large to test a given question. "Meta-analyses can reduce the role of chance in the interpretation but may introduce bias and confounding," Hennekens and DeMets write in the Dec. 2 Journal of the American Medical Association. "Such results should be considered more as hypothesis formulating than as hypothesis testing."

These concerns do not make clinical trials worthless, nor do they render science impotent. Some studies show dramatic effects that don't require sophisticated statistics to interpret. If the P value is 0.0001, a hundredth of a percent chance of a fluke, that is strong evidence, Goodman points out. Besides, most well-accepted science is based not on any single study, but on studies that have been confirmed by repetition. Any one result may be likely to be wrong, but confidence rises quickly if that result is independently replicated.

"Replication is vital," says statistician Juliet Shaffer, a lecturer emeritus at the University of California, Berkeley. And in medicine, she says, the need for replication is widely recognized. "But in the social sciences and behavioral sciences, replication is not common," she noted in San Diego in February at the annual meeting of the American Association for the Advancement of Science. "This is a sad situation."
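A back-of-the-envelope calculation hints at why replication raises confidence so quickly. The sketch below treats studies as independent and ignores publication bias, both simplifying assumptions.

# Under a true null hypothesis, one study clears the .05 bar by fluke
# about 1 time in 20, but two independent studies both doing so is far
# rarer.  (Independence and no publication bias are assumed here.)
alpha = 0.05

one_fluke  = alpha          # 0.05   -> 1 chance in 20
two_flukes = alpha ** 2     # 0.0025 -> 1 chance in 400

print(f"chance a single null study reaches P < .05: {one_fluke:.4f}")
print(f"chance two independent null studies both do: {two_flukes:.4f}")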
Bayes watch

Such sad statistical situations suggest that the marriage of science and math may be desperately in need of counseling. Perhaps it could be provided by the Rev. Thomas Bayes.

Most critics of standard statistics advocate the Bayesian approach to statistical reasoning, a methodology that derives from a theorem credited to Bayes, an 18th century English clergyman. His approach uses similar math, but requires the added twist of a "prior probability," in essence an informed guess about the expected probability of something in advance of the study. Often this prior probability is more than a mere guess; it could be based, for instance, on previous studies.

Bayesian math seems baffling at first, even to many scientists, but it basically just reflects the need to include previous knowledge when drawing conclusions from new observations. To infer the odds that a barking dog is hungry, for instance, it is not enough to know how often the dog barks when well-fed. You also need to know how often it eats in order to calculate the prior probability of being hungry. Bayesian math combines a prior probability with observed data to produce an estimate of the likelihood of the hunger hypothesis.

"A scientific hypothesis cannot be properly assessed solely by reference to the observational data," but only by viewing the data in light of prior belief in the hypothesis, wrote George Diamond and Sanjay Kaul of UCLA's School of Medicine in 2004 in the Journal of the American College of Cardiology. "Bayes' theorem is ... a logically consistent, mathematically valid, and intuitive way to draw inferences about the hypothesis."

With the increasing availability of computer power to perform its complex calculations, the Bayesian approach has become more widely applied in medicine and other fields in recent years. In many real-life contexts, Bayesian methods do produce the best answers to important questions. In medical diagnoses, for instance, the likelihood that a test for a disease is correct depends on the prevalence of the disease in the population, a factor that Bayesian math would take into account.
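The diagnostic case can be made concrete with Bayes' theorem. In the sketch below, the prevalence, sensitivity and false positive rate are invented for illustration; only the logic of combining them is the point.

# A minimal sketch of Bayes' theorem applied to a diagnostic test.
# All three input numbers are illustrative assumptions.
prevalence     = 0.01   # prior probability: 1 percent of the population has the disease
sensitivity    = 0.95   # P(test positive | disease)
false_positive = 0.05   # P(test positive | no disease)

# Overall chance of a positive test, by the law of total probability
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(disease | positive test)
posterior = sensitivity * prevalence / p_positive

print(f"chance a positive result really means disease: {posterior:.1%}")
# Roughly 16 percent, far lower than the test's 95 percent sensitivity,
# because the low prevalence (the prior) dominates the calculation.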
But Bayesian methods introduce a confusion into the actual meaning of the mathematical concept of "probability" in the real world. Standard or "frequentist" statistics treat probabilities as objective realities; Bayesians treat probabilities as "degrees of belief" based in part on a personal assessment or subjective decision about what to include in the calculation. That's a tough placebo to swallow for scientists wedded to the "objective" ideal of standard statistics. "Subjective prior beliefs are anathema to the frequentist, who relies instead on a series of ad hoc algorithms that maintain the facade of scientific objectivity," Diamond and Kaul wrote.

Conflict between frequentists and Bayesians has been ongoing for two centuries. So science's marriage to mathematics seems to entail some irreconcilable differences. Whether the future holds a fruitful reconciliation or an ugly separation may depend on forging a shared understanding of probability.

"What does probability mean in real life?" the statistician David Salsburg asked in his 2001 book The Lady Tasting Tea. "This problem is still unsolved, and ... if it remains unsolved, the whole of the statistical approach to science may come crashing down from the weight of its own inconsistencies."

Statistical criticisms

For decades, experts have noted serious flaws in the standard statistical approach. Some typical comments:

"Despite the awesome preeminence this method has attained ... it is based upon a fundamental misunderstanding of the nature of rational inference, and is seldom if ever appropriate to the aims of scientific research." (William Rozeboom, 1960)

"What used to be called judgment is now called prejudice, and what used to be called prejudice is now called a null hypothesis.... It is dangerous nonsense." (A. W. F. Edwards, 1972)

"Huge sums of money are spent annually on research that is seriously flawed through the use of inappropriate designs, unrepresentative samples, small samples, incorrect methods of analysis, and faulty interpretation." (D.G. Altman, 1994)

"The methods of statistical inference in current use ... have contributed to a widespread misperception ... that statistical methods can provide a number that by itself reflects a probability of reaching erroneous conclusions. This belief has damaged the quality of scientific reasoning and discourse." (Steven Goodman, 1999)

"Many investigators do not know what our most cherished, and ubiquitous, research desideratum, 'statistical significance,' really means. This ... signals an educational failure of the first order." (Raymond Hubbard and J. Scott Armstrong, 2006)

"These classical methods [of significance testing] are in fact intellectually quite indefensible and do not deserve their social success." (Colin Howson and Peter Urbach, 2006)

"A finding of 'statistical' significance ... is on its own almost valueless, a meaningless parlor game." (Stephen Ziliak and Deirdre McCloskey, 2008)

For references and more information, see http://bit.ly/aq1x28
