Wednesday, February 11, 2009

Pseudo-science Part II

A friend of mine brings two articles to my attention this evening.

The first one concerns Dr. Andrew Wakefield, who famously produced a study suggesting a possible link between the MMR vaccine and autism. The result of this study was frightened parents who refused the vaccine, followed by an increase in measles cases and a rise in deaths from the disease.

Article Here: http://www.timesonline.co.uk/tol/life_and_style/health/article5683671.ece

It turns out that the doctor may have falsified some data on the 12 patients who took part in the study. Now, I have not verified this article, although it does cite its own sources for those who wish to check the claim. I personally have a huge issue with basing conclusions of such a staggering nature on just 12 patients... that's taking the art of lying with statistics to an extreme that is... Well, it gives those who lie with statistics a bad name. The unjustified panic this study caused is perfect grounds for a serious ethical review. At a minimum.
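
To put some numbers on how little twelve patients can tell you, here is a quick back-of-the-envelope sketch in Python. The "8 of 12" figure below is invented purely for illustration (it is not from the study); the point is how wide the uncertainty remains at that sample size:

from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    halfwidth = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - halfwidth, center + halfwidth

# Hypothetical: suppose 8 of the 12 patients had shown the claimed effect.
lo, hi = wilson_interval(8, 12)
print(f"Observed 8/12 = {8/12:.0%}, 95% CI roughly {lo:.0%} to {hi:.0%}")
# Prints roughly: Observed 8/12 = 67%, 95% CI roughly 39% to 86%

In other words, with only twelve patients, an observed two-thirds rate is statistically compatible with almost any underlying rate from about 39% to 86%. You simply cannot hang a staggering conclusion on that.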

The second article reports that "nightlights do not cause nearsightedness".

Article Here: http://www.personalmd.com/news/n0310114743.shtml

This article explains that the original finding, that nightlights cause nearsightedness, was incorrect. It turns out (again, I haven't checked the findings for myself and am just going off what I read) that more often than not, the parents of the children were themselves myopic. As the friend who pointed this article out to me said (paraphrased here), "nightlights don't cause nearsightedness in children, nearsighted parents cause nightlights!"

This article provides a perfect example of confirmation bias. That is, an idea backed by a tiny bit of data suddenly grows in your mind, to the exclusion of all other data and possibilities, as you focus on that "ah ha!" moment. Ah, the "ah ha!" moment. That one rears its ugly head in accident investigation rather often as well.

There is another article, recently published in the journal of the Human Factors and Ergonomics Society, which commits this error even more egregiously. I will not go further into it, for professional reasons, other than to say A) I'm a member of HFES, and B) the topic of the article was confirmation bias.

So what is the takeaway about pseudo-science this time? Did we learn that some scientists are blatantly incapable of doing real research? Perhaps they are blinded by that amazing "ah ha!" moment? Hey, I've been there! It's darned hard to avoid these traps if you are not mentally and emotionally prepared to search for them, especially since the first person to suspect is always yourself. Who wants to criticize themselves? Yes, perhaps we can say we learned these things. Perhaps we can righteously hammer these scientists, either for unethical behavior in the first article (which it was, in my opinion) or for simple ignorance and intellectual dishonesty in the second.

A small aside here. Why didn't the people responsible for peer review catch this? It's their JOB, after all.

But let us look at a different takeaway. Both articles found evidence of something (albeit, in one case, of highly questionable statistical significance). Yet that evidence indicated their hypotheses should be rejected (good statistics-speak there). Well, okay. So what?
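
For the non-statisticians, here is a minimal sketch of what such a "failed" result looks like in practice. The numbers are invented for illustration only (hypothetical myopia rates with and without nightlights), and it assumes SciPy is available:

from scipy import stats

# Purely hypothetical data, invented for illustration -- not from the study.
with_nightlight = [0.21, 0.25, 0.19, 0.23, 0.22, 0.20]
without_nightlight = [0.22, 0.24, 0.20, 0.21, 0.23, 0.19]

t_stat, p_value = stats.ttest_ind(with_nightlight, without_nightlight)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# A p-value above 0.05 means we fail to reject the null hypothesis.
# "We found no detectable effect" is still a result, and still worth reporting.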

Consider this. If they had published the failures of their studies (or been more rigorous in searching for alternate hypotheses and confounds, and published those results instead), would the studies have been any less valuable? I would argue that, especially in the case of the first article, they would have been just as impactful and just as valuable! Perhaps more so. WE LEARN FROM OUR MISTAKES!

The second article, if this was a topic I was enamored with (which I am not), would have provided a great topic for further research. "We looked at the kids and nightlights, but found that the parents caused the nightlights. Let's go look at the parents now!"

These articles both highlight poor scientific methodology and a lack of scientific and intellectual humility that boggles the mind. Yet out of the dust we pull a lesson on why it is vital to publish our failures as well as our successes. We learn in either case. Since science is the pursuit of knowledge... I believe you all get my point. Comments?

Oh! One more thing. My friend who shot me these articles? Follow his blog HERE:
http://brertiger.blogspot.com

2 comments:

  1. The thing that gets to me most about Wakefield's study is that it was a bad study even /before/ this business about the falsified data came to light. It covered TWELVE children. That is not what you might call a statistically significant sample size!

    And yet, look at the impact that it had. Parents panicked about getting the MMR vaccine to the extent that cases of measles went from 56 to 1348 in 10 years! This is worse than merely "bad science". This is positively criminal.

  2. I did some follow-up research on this in an attempt to come up with the method Wakefield used to convince an IRB that 12 was a significant sample... I failed.

    The thing that struck me most, though, was that all the people out there who support this study do it because they "believe" the results are correct. They "believe" that vaccines are bad for kids. They have no interest in science. Science is irrelevant because they "believe". Ugh, makes me sick.
