How robust are your results? A closer look at Sensitivity Analyses

The credibility of the results of a clinical trial depends on the validity of the assumptions made and the methods of analysis used to test them. Often there is concern about how the results of a study should be interpreted and whether they would change with a different definition of the outcome, a different analytical method, protocol deviations, missing data, outliers, and so on. These questions can be answered using a Sensitivity Analysis (SA).

Simply put, a sensitivity analysis is a different way of looking at and analyzing the data: the assumptions or key input variables are altered to see whether doing so leads to a different result or conclusion.

Let's look at a few examples:

Example 1

All medical studies make assumptions. One of the most common is that the thing you are measuring actually captures what you intend to measure. For example, a study looking at the link between body mass index (BMI) and heart attack assumes that BMI is a valid measure of adiposity. A sensitivity analysis in that study might examine other adiposity measures (waist circumference, for example) to show that the inference made (that greater adiposity leads to heart attack) is robust to the choice of measure. This helps the study “hang together” and supports broader conclusions.
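As a concrete sketch of how such a check might be run, the example below fits the same outcome model twice, once with BMI and once with waist circumference as the exposure, and compares the effect estimates. The data are simulated and the variable names hypothetical; this illustrates the general idea, not any particular study's code.

```python
# Sketch of Example 1: refit the same outcome model with an alternative
# exposure measure and check whether the inference changes. All data and
# variable names are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
adiposity = rng.normal(size=n)                                     # latent adiposity
df = pd.DataFrame({
    "bmi": 27 + 4 * adiposity + rng.normal(scale=2, size=n),       # noisy measure 1
    "waist_cm": 95 + 12 * adiposity + rng.normal(scale=6, size=n), # noisy measure 2
    "age": rng.normal(55, 10, n),
})
logit_risk = -3 + 0.8 * adiposity + 0.02 * (df["age"] - 55)
df["heart_attack"] = rng.binomial(1, 1 / (1 + np.exp(-logit_risk)))

# Standardize both adiposity measures so the odds ratios are per 1 SD and comparable.
for col in ("bmi", "waist_cm"):
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

primary = smf.logit("heart_attack ~ bmi_z + age", data=df).fit(disp=False)
sensitivity = smf.logit("heart_attack ~ waist_cm_z + age", data=df).fit(disp=False)
print("OR per SD of BMI:  ", round(float(np.exp(primary.params["bmi_z"])), 2))
print("OR per SD of waist:", round(float(np.exp(sensitivity.params["waist_cm_z"])), 2))
```

If the two odds ratios point the same way, the inference about adiposity does not hinge on the particular measure chosen.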

Example 2

A study published in NEJM in 2007 found that influenza vaccination was associated with significant reductions in the risk of hospitalization for pneumonia or influenza and in the risk of death among community-dwelling elderly persons.

The question: Was there an unmeasured confounder (e.g., the patient's functional status) that might have influenced the estimates of vaccine effectiveness? The concern was that persons with poor functional status were less likely to be vaccinated but more likely to get the outcome (i.e., pneumonia, hospitalization, or death). This would have led the main analysis to overestimate vaccine effectiveness. Unfortunately, the authors did not have a measure of functional status, so they could not adjust for it.

Solution: The authors performed an SA to determine how strong the effect of an unmeasured confounder would have to be to abolish the vaccine effectiveness observed in the main analysis. The SA showed that, although still significant, the estimates of vaccine effectiveness were incrementally lower with increasing prevalence of the confounder (impaired functional status) and increasing strength of its association with the outcomes of interest. In the most extreme scenario, with a confounder prevalence of 60% and a threefold increase in the risk of the outcome, the estimates of vaccine effectiveness were reduced to 7% for hospitalization and 33% for death.
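To give a sense of the mechanics, the sketch below implements the standard external-adjustment formula for a single binary unmeasured confounder (the "array approach" described by Schneeweiss), which is the general technique behind this kind of SA. It is not the authors' code, and every number in it is an illustrative placeholder rather than a figure from the study.

```python
# Sketch of an external-adjustment ("array approach") sensitivity analysis
# for one binary unmeasured confounder. Numbers are illustrative placeholders.

def adjusted_rr(observed_rr: float,
                p_conf_exposed: float,
                p_conf_unexposed: float,
                rr_conf_outcome: float) -> float:
    """Back out the confounder-adjusted relative risk from the observed one.

    observed_rr      -- relative risk from the main (unadjusted) analysis
    p_conf_exposed   -- assumed confounder prevalence among the vaccinated
    p_conf_unexposed -- assumed confounder prevalence among the unvaccinated
    rr_conf_outcome  -- assumed relative risk of the confounder for the outcome
    """
    bias = (p_conf_exposed * (rr_conf_outcome - 1) + 1) / \
           (p_conf_unexposed * (rr_conf_outcome - 1) + 1)
    return observed_rr / bias

# Hypothetical scenario: observed RR 0.70 (30% effectiveness); poor functional
# status assumed to be less common among the vaccinated (40%) than the
# unvaccinated (60%) and to triple the risk of the outcome.
rr = adjusted_rr(observed_rr=0.70, p_conf_exposed=0.40,
                 p_conf_unexposed=0.60, rr_conf_outcome=3.0)
print(f"adjusted RR = {rr:.2f}, adjusted effectiveness = {1 - rr:.0%}")
# -> adjusted RR = 0.86, adjusted effectiveness = 14%
```

Repeating this calculation over a grid of assumed prevalences and confounder-to-outcome risk ratios produces the kind of array of progressively lower effectiveness estimates described above.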

In the context of the current NEJM study assessing the risk of adult ESRD in patients with childhood CKD, the concern was that the presence of hematuria and the duration of follow-up could potentially bias the results, thereby overestimating the risk of ESRD. Persistent asymptomatic isolated microscopic hematuria among adolescents and young adults is a predictive risk marker of future ESRD, attributable mostly to glomerular disease as the primary etiology. Close long-term follow-up of such patients may lead to earlier detection of ESRD, introducing a ‘surveillance bias’ that suggests an increased risk of ESRD. Nevertheless, early detection of chronic kidney disease would likely have triggered therapeutic interventions with the potential to slow progression to ESRD, decreasing rather than increasing the risk of ESRD.

To tackle this problem, the authors performed an SA. Excluding persons with microhematuria did not materially change the association between a history of childhood kidney disease and an increased risk of ESRD in adulthood. Similarly, an analysis that stratified participants with a history of childhood kidney disease according to duration of follow-up showed attenuation of the increased risk of ESRD with increasing duration of follow-up.
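As a rough illustration of the exclusion-based part of such an SA, the sketch below refits the same Cox proportional-hazards model after dropping participants with microhematuria. The data are synthetic and the column names hypothetical (it assumes the lifelines package), so it shows the general approach rather than the authors' actual analysis.

```python
# Sketch of an exclusion-based sensitivity analysis on a synthetic cohort
# with hypothetical column names; not the authors' actual code or data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic placeholder data standing in for the real cohort.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "childhood_kidney_disease": rng.integers(0, 2, n),
    "microhematuria": rng.integers(0, 2, n),
})
df["followup_years"] = rng.exponential(scale=25, size=n)
df["esrd_event"] = rng.integers(0, 2, n)

def hazard_ratio(data: pd.DataFrame) -> float:
    """Fit a Cox model and return the hazard ratio for childhood kidney disease."""
    cph = CoxPHFitter()
    cph.fit(data[["followup_years", "esrd_event", "childhood_kidney_disease"]],
            duration_col="followup_years", event_col="esrd_event")
    return float(np.exp(cph.params_["childhood_kidney_disease"]))

hr_main = hazard_ratio(df)                              # primary analysis
hr_sens = hazard_ratio(df[df["microhematuria"] == 0])   # microhematuria excluded
print(f"HR, all participants:        {hr_main:.2f}")
print(f"HR, microhematuria excluded: {hr_sens:.2f}")
```

If the hazard ratio is essentially unchanged after the exclusion, as it was in the study, the association cannot be explained by the microhematuria subgroup alone.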


Thus, SA plays an important role in checking the robustness of the conclusions of a study or clinical trial and helps strengthen the credibility of the findings. A brief survey by Thabane et al. showed that, despite their importance, sensitivity analyses are underused in practice. They are often employed in health economics research, for example in cost-effectiveness analyses, but deserve wider use in other areas of health and medical research.

Commentary written by Manasi Bapat, Renal Fellow, Mount Sinai, NY, and NSMC intern, Class of 2018