Health Kismet
Why You Can’t Always Trust Medical Research Results
[Image: clinical research questions]

If you open the “Health” section of any online or print publication, you’re bound to find plenty of stories saying “New study suggests X causes Y”, or something similar. The hook is usually an unusual correlation that grabs your attention.

The articles usually give the impression that there’s a linear relationship between the variables being studied, and that if you just ate more of some food, or did more of some behavior, you’d get similar results.

If only it were that easy, right?

The problem is that new studies showing bizarre correlations are typically done with small sample sizes, and as follow-up studies with larger samples come in, the observed differences shrink until the original effect is indistinguishable from random noise. The march towards randomness, however, never makes the headlines.
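To get a feel for why, here’s a minimal sketch (my own illustration, not anything from the studies discussed here): draw two variables that have nothing to do with each other, run many simulated “studies” at different sample sizes, and look at the largest correlation that shows up purely by chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_spurious_correlation(n, trials=1000):
    """Largest |correlation| seen between two UNRELATED variables
    across many repeated studies of sample size n."""
    best = 0.0
    for _ in range(trials):
        x = rng.normal(size=n)
        y = rng.normal(size=n)  # truly independent of x
        r = np.corrcoef(x, y)[0, 1]
        best = max(best, abs(r))
    return best

for n in (20, 100, 1000, 10000):
    print(f"n = {n:>5}: biggest fluke correlation = {max_spurious_correlation(n):.2f}")
```

With only 20 subjects, pure noise regularly produces correlations big enough to make a headline; with thousands of subjects, it can’t.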

Alex Tabarrok gave a great example today on Marginal Revolution. In 1992, an article found that a variant of the gene for an enzyme called ACE significantly changed someone’s chances of having a heart attack. Woohoo! And the article was published in Science, the journal of all journals.

It must be fact then, right?

As it turns out, not quite.

The study was done on 500 people, but as sample sizes in later studies grew larger, the effect began to fade, until eventually there was no effect at all.

This phenomenon is not unique. Look at the graph below. Genetic variants of 8 different enzymes that had a statistically significant effect on disease in small samples had no effect when the studies were repeated with larger ones.

[Figure: funnel plot from Nature Genetics]

Now think about all the articles you’ve read stating “New study suggests pistachios might cure cancer”, or some other such claim. Huh.

The problem is compounded when you consider that most research finds no significant results at all… but null results don’t get published nearly as often. No news is boring.

One report even argued that the majority of published medical research findings are false: when sample sizes are small and few of the hypotheses being tested are actually true, Bayesian reasoning predicts that most positive “findings” are really false positives.
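That argument is mostly arithmetic. Here’s a rough sketch (the numbers are illustrative assumptions on my part, not figures from the report): suppose only 10% of the hypotheses researchers test are actually true, and positive results are declared at the usual 5% significance threshold.

```python
# Share of "significant" findings that reflect a real effect, using the
# standard Bayesian argument. All numbers are illustrative assumptions.
prior = 0.10   # fraction of tested hypotheses that are actually true
alpha = 0.05   # false-positive rate (significance threshold)

for power in (0.8, 0.4, 0.2):  # chance a study detects a real effect
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    ppv = true_pos / (true_pos + false_pos)
    print(f"power = {power:.1f}: {ppv:.0%} of positive findings are true")
```

Even with well-powered studies, only about two thirds of “significant” results would be real under these assumptions, and the share drops quickly as power (read: sample size) shrinks.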

So the next time you read about the latest trend, or a finding that some food can cure a disease, take it with a grain of salt. The researchers were probably just thinking too small.
