It is indeed nice to have three laws - recall Newton's laws of motion, and, perhaps more famously, Asimov's Three Laws of Robotics. A set of three laws is pithy and epigrammatic, and attempts to ground us. The laws teach us humility even as research and innovation reach ever upwards. So what does Mukherjee mean by this third law?
He starts off with his fellowship in Oncology. The Human Genome Project had had its great moment, and though the term 'precision medicine' had not even made it past a synapse, the oncology world was dealing with precisely engineered monoclonal antibodies with fabulous, previously unheard-of outcomes. Following on the success of the tyrosine kinase inhibitor Imatinib (Gleevec), Mukherjee and his cohort of fellows were seeing one of its 'cousins' used. The fellows saw dramatic positive results in their patients - but somehow, paradoxically, and in stark contrast, the actual clinical trial showed little benefit. How did this happen? Selection bias had struck the fellows. Graduating fellows handed over the patients with the most 'educational value', aka the patients doing well; the patients who were not doing well were handed back to the attending physician. As in the parable of the broken window, one has to be careful about what is not seen. The patients who are lost to follow-up may be lost because they are too sick to come back. Though what Mukherjee describes was not a perfect experiment, it illustrates a common enough bias - bring it up the next time an experienced colleague counters your meticulously gathered data with an 'in my experience…'. Vinay Prasad explains this responder bias, all too common in oncology, in a nice tweetorial here.
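A toy simulation makes the mechanism concrete. The numbers below (response rate, the fraction of non-responders who stay visible) are invented for illustration, not taken from the trial Mukherjee describes:

```python
import random

random.seed(42)

# Hypothetical numbers, purely for illustration.
TRUE_RESPONSE_RATE = 0.20   # what the drug actually achieves
N_PATIENTS = 1000

# Responders stay on the fellows' panel ("educational value");
# most non-responders are handed back to the attending and vanish
# from the fellows' view.
visible_to_fellows = []
for _ in range(N_PATIENTS):
    responded = random.random() < TRUE_RESPONSE_RATE
    if responded or random.random() < 0.20:   # only ~20% of non-responders stay visible
        visible_to_fellows.append(responded)

observed = sum(visible_to_fellows) / len(visible_to_fellows)
print(f"True response rate:       {TRUE_RESPONSE_RATE:.0%}")
print(f"Rate the fellows observe: {observed:.0%}")   # roughly 55% here
```

The fellows' panel suggests the drug works more than twice as often as it really does - no dishonesty required, just a skewed handover.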
Another example cited by Mukherjee is the radical mastectomy - the Halsted procedure, championed by the famous Hopkins surgeon of that name starting in 1882. In a clever piece of naming, the word 'radical' makes one imagine that the very roots of the cancer have been eradicated, and it took nearly a century before the futility of the approach was revealed by a randomized controlled trial. A clever study from Giovannucci shows an example of recall bias. In women with breast cancer, diet histories taken after the cancer diagnosis suggested that high fat intake was associated with the cancer. However, dietary histories taken from the same women a decade before the diagnosis showed no such association. The cancer diagnosis creates false memories. Food questionnaires, forgive the pun, should be taken with a pinch of salt.
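The same trick works for recall bias. In this sketch (all numbers invented), cases and controls have identical true diets, but cases over-report fat intake after the diagnosis, conjuring an association out of nothing:

```python
import random

random.seed(7)

N = 5000
TRUE_HIGH_FAT = 0.40   # identical in cases and controls: no real association

def odds_ratio(over_report: float) -> float:
    """Odds ratio for 'high fat diet', cases vs controls, when cases
    misremember a low-fat diet as high-fat at the given rate."""
    case_exposed = control_exposed = 0
    for _ in range(N):
        high_fat = random.random() < TRUE_HIGH_FAT
        control_exposed += high_fat                                   # accurate records
        case_exposed += high_fat or (random.random() < over_report)   # biased recall
    odds = lambda count: (count / N) / (1 - count / N)
    return odds(case_exposed) / odds(control_exposed)

print(f"OR, diet recorded before diagnosis: {odds_ratio(0.00):.2f}")  # 1.00
print(f"OR, diet recalled after diagnosis:  {odds_ratio(0.25):.2f}")  # ~1.8, spurious
```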
But these are all epidemiological studies. Surely randomized controlled trials are not biased? The entire rationale of randomization is to prevent these kinds of biases - confounding, selection bias, information bias - from creeping in. However, trial methods do count. Though Mukherjee doesn't go into those aspects, blinding, allocation concealment, and proper randomization are but a few features of trial quality whose absence can bias even the best laid plans. Check out the Cochrane risk of bias tool, which explains a few of these in detail. There is more to this, of course. Should one change practice on the basis of a single small trial? Enter publication bias - or the file drawer (full of unpublished negative trials) bias, as simulated below. How about the more important issue of generalizability, or external validity? Does a psychology study of WEIRD (Western, Educated, Industrialized, Rich, Democratic) individuals apply to all humanity? Surely not. Men and women are biologically different - but not for all conditions, and surely not in response to all therapies. The need to do trials in every subpopulation is sometimes carried too far, however, as when effective therapies are denied to dialysis patients. Just because a trial has been done in the general population doesn't mean the therapy will not work in dialysis patients. Generalizability should not be an excuse to practice renalism.
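The file drawer effect is easy to demonstrate. This minimal sketch (trial counts, arm sizes, and the significance cutoff are arbitrary assumptions) runs many small trials of a therapy with zero true effect and 'publishes' only the impressive-looking ones:

```python
import random
import statistics

random.seed(1)

N_TRIALS = 500
N_PER_ARM = 30
SE = (2 / N_PER_ARM) ** 0.5   # standard error of a difference in means

published = []
for _ in range(N_TRIALS):
    control = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    treated = [random.gauss(0, 1) for _ in range(N_PER_ARM)]   # true effect = 0
    effect = statistics.mean(treated) - statistics.mean(control)
    if effect > 2 * SE:        # crude stand-in for 'significant and positive'
        published.append(effect)

print(f"Trials run:            {N_TRIALS}")
print(f"Trials 'published':    {len(published)}")
if published:
    print(f"Mean published effect: {statistics.mean(published):.2f}")   # clearly > 0
```

A meta-analysis of this 'published' literature would find a solid benefit for a therapy that does nothing; the null results never left the drawer.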
So, Mukherjee wants us to be bias hunters, on the lookout for biases in every study. Eternal vigilance is necessary.
Summary by Swapnil Hiremath, Ottawa