The Economist has a great article on why a lot of medical research is bogus. The researcher it describes, Peter Austin, applied the same statistical methods that typical medical researchers use to hospital admission records from Ontario and came up with the following results:
PEOPLE born under the astrological sign of Leo are 15% more likely to be admitted to hospital with gastric bleeding than those born under the other 11 signs. Sagittarians are 38% more likely than others to land up there because of a broken arm. Those are the conclusions that many medical researchers would be forced to make from a set of data presented to the American Association for the Advancement of Science by Peter Austin of the Institute for Clinical Evaluative Sciences in Toronto. At least, they would be forced to draw them if they applied the lax statistical methods of their own work to the records of hospital admissions in Ontario, Canada, used by Dr Austin.
This is obviously bogus. But medical researchers all over are using the same methods to show various connections. The basic issue boils down to the statistical validity of drawing a conclusion from the available data. Taken in isolation, the chance of Sagittarians appearing 38% more likely to break an arm purely by accident is rather low, leading one to believe that this is a statistically significant conclusion. However, when you realize that Dr. Austin tried 24 different hypotheses (“Sagittarians are more likely to break their arm”, “Leos are more likely to break their arm”, “Sagittarians are more likely to get gastric bleeding”, “Leos are more likely to get gastric bleeding”, etc.), the probability that at least one or two of them will give results that seem pretty improbable increases significantly, as the small simulation below illustrates. This is usually not taken into consideration by the researchers.
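To get a feel for how bad this gets, here is a minimal sketch in Python showing what happens when you test 24 hypotheses on pure noise. The 0.05 significance threshold and the random seed are my own illustrative choices, not anything from Dr. Austin's study; the point is only that when every null hypothesis is true, p-values are uniformly distributed, so some of them will dip below the threshold just by luck.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed, for reproducibility
alpha = 0.05          # conventional "statistically significant" threshold
n_hypotheses = 24     # 12 signs x 2 conditions, as in the example above
n_trials = 10_000     # number of simulated "studies"

# For each simulated study, draw 24 p-values from pure noise
# (uniform on [0, 1], which is how p-values behave when the null
# hypothesis is true) and check whether any falls below alpha.
p_values = rng.uniform(size=(n_trials, n_hypotheses))
any_false_positive = (p_values < alpha).any(axis=1)

print(f"Studies with at least one 'significant' finding: {any_false_positive.mean():.2f}")
# Analytically: 1 - (1 - 0.05)**24 ≈ 0.71
```

In other words, with two dozen hypotheses and nothing but random data, roughly 70% of such studies would still report at least one significant-looking association.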