When is a healthcare intervention actually ‘worth it’ to a patient? – The smallest worthwhile effect

When seeking to identify whether a healthcare intervention was successful, we typically look for statistical significance in a change, to show that the effect was not simply down to chance. This is sensible and widely used, but statistical significance alone (i.e. p < 0.05) is not enough to say an intervention was worthwhile. In a study with many participants, a very small change can reach statistical significance, yet have no importance to a clinician or a patient. For example, if researchers find that a new blood pressure drug reduces blood pressure by 1 mmHg with statistical significance, that effect means nothing clinically, so it likely isn't an intervention you would recommend. However, if another drug reduces blood pressure by 10 mmHg and the result is statistically significant, most people would consider that change clinically important too. That 10 mmHg change may be the difference between someone being hypertensive (≥140/90 mmHg) and moving into the high-normal category (130–139/85–89 mmHg), which brings important reductions in the risk of cardiovascular events. But how much of an effect makes something clinically important? In other words, what is the minimal clinically important difference (MCID)?

The MCID was defined in 1989 by Jaeschke et al. as “the smallest difference in score in the domain of interest which patients perceive as beneficial and which would mandate, in the absence of troublesome side effects and excessive cost, a change in the patient’s management” (1). In my mind, the key part of this statement is that it is the smallest change which patients perceive as being beneficial, because, at the end of the day, if we as clinicians are giving interventions which are not likely to make the patient feel better, what is the point of administering them?

The MCID has been defined for many different measures, from the six-minute walk test (6MWT) to ratings of pain, but the methods used to determine it raise questions about whether they are truly patient-centred. Take pain: it can be measured on many different scales, but let's use the 11-point numerical rating scale of pain (NRS-P), which runs from 'no pain' to 'worst pain imaginable'. The MCID is calculated by putting patients through an intervention and asking them to rate their pain on the NRS-P at the beginning and end of treatment, to determine the change which occurred. At the end of treatment they are also asked how they feel overall on a global rating scale: do they feel much worse, slightly worse, the same, slightly better, or much better? The responses on the two scales are compared, and the change in score which most closely corresponds with feeling 'slightly better' or 'slightly worse' is taken to be the MCID.
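To make the anchor-based logic concrete, here is a minimal sketch in Python. It is a simplification of the method described above (one common variant takes the mean change score among patients whose global rating was 'slightly better'), and all patient data in it are made up:

```python
def anchor_based_mcid(baseline, final, global_ratings, anchor="slightly better"):
    """Mean NRS-P change among patients who report the anchor rating.

    A positive change means pain was reduced. This is an illustrative
    simplification of anchor-based MCID estimation, not the exact
    statistical procedure used in the cited studies.
    """
    changes = [b - f for b, f, g in zip(baseline, final, global_ratings)
               if g == anchor]
    return sum(changes) / len(changes)

# Hypothetical cohort: 0-10 NRS-P scores before and after treatment,
# plus each patient's global rating of change.
baseline = [7, 6, 8, 5, 7, 6]
final    = [5, 6, 5, 4, 3, 5]
ratings  = ["slightly better", "the same", "much better",
            "slightly better", "much better", "slightly better"]

# Changes among the "slightly better" group are [2, 1, 1], so the
# estimated MCID here is their mean, about 1.33 points.
print(anchor_based_mcid(baseline, final, ratings))
```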

This sounds pretty good on the surface, but it is the researchers or clinicians who decide that patients only have to feel 'slightly better' for a change to count as clinically important. What if patients want to feel much better, and feeling only 'slightly better' wasn't actually worth the treatment they went through? These are the major limitations of the MCID: it factors in neither the patient's view on what amount of change is important, nor the costs, risks and inconveniences of the treatment which produces the effect.

In 2009, Ferreira et al. (2) coined the term 'smallest worthwhile effect', which is intervention-specific and factors in the costs, risks and inconveniences of the intervention. To see why intervention-specific measures matter, imagine two patients undergoing different treatments for their pain: one has major surgery, the other attends a series of educational sessions with a clinician. If the MCID were a 2-point reduction on the 11-point NRS-P, and both patients achieved a reduction of 2.5 points, would both be equally happy? Would they both consider that they saw a clinically important change? Probably not, because the surgery carries much more severe costs, risks and inconveniences.

When calculating the smallest worthwhile effect, the intervention is explained to patients, who are then asked what effect, over and above the effect of no treatment, would make the intervention worthwhile to them, considering its costs, risks and inconveniences. The clinician then asks, "what if that effect was 0.5 points less? Would that still be worthwhile?", and this is repeated until the patient no longer considers the treatment worthwhile; the smallest effect they still accepted is the smallest worthwhile effect for that treatment. Another key aspect of the smallest worthwhile effect is that the hypothetical effect patients are considering is in addition to the natural history of the condition. In low back pain, most people see around a 30% reduction in pain over the first few weeks of a flare-up, so the effect of any intervention must be over and above this natural recovery or regression to the mean.
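The stepping-down questioning can be sketched as a simple loop. In this illustration the patient's 'worthwhile' judgement is simulated by a function; in a real study it is elicited by actually asking the patient at each step, and is not known in advance:

```python
def elicit_swe(initial_effect, is_worthwhile, step=0.5):
    """Step the hypothetical effect down by `step` points until the patient
    no longer considers the treatment worthwhile. The last effect they
    accepted is their smallest worthwhile effect for that treatment.
    Returns None if even the starting effect is not worthwhile.
    """
    effect = initial_effect
    if not is_worthwhile(effect):
        return None
    while is_worthwhile(effect - step):
        effect -= step
    return effect

# Hypothetical patient who considers surgery worthwhile only for a
# reduction of at least 4 points on the 11-point NRS-P.
patient = lambda effect: effect >= 4

print(elicit_swe(10, patient))  # steps 10 -> 9.5 -> ... -> 4.0, prints 4.0
```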

The current research (3, 4) on the smallest worthwhile effect for pain has looked at several physiotherapy interventions and non-steroidal anti-inflammatory drugs (NSAIDs) in low back pain, but there are many treatments beyond these. For my Honours year, I'm conducting a study to identify the smallest worthwhile effects of different interventions for low back pain.

So why is this important?

The value in knowing the smallest worthwhile effect of an intervention is that it tells clinicians, on average, what effect patients consider worthwhile from different treatments. From there, they can identify whether those treatments are actually able to produce that effect. For example, suppose a patient feels that, considering the side effects of the medication, they would need a 3-point reduction in pain (on the 11-point NRS) for a drug to be worthwhile compared to no treatment, but the clinician knows that the best evidence shows the drug typically reduces pain intensity by only 1 point. The clinician may then recommend other treatments with a more favourable cost-benefit profile, or ones whose smallest worthwhile effect aligns more closely with their actual efficacy. Ultimately, it is crucial that we in research ask patients what they think of the interventions we are applying to them, and get their input into whether a treatment actually 'works' and is worthwhile from their perspective.
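The clinical reasoning in that example reduces to a simple comparison, sketched below. All effect sizes and smallest worthwhile effects here are hypothetical numbers, chosen only to mirror the example above:

```python
def worth_recommending(expected_effect, swe):
    """A treatment clears the bar when its typical effect (from trial
    evidence) meets or exceeds the patient's smallest worthwhile effect
    for that specific treatment."""
    return expected_effect >= swe

# Hypothetical options: (typical NRS-P reduction, patient's SWE for it).
# The drug carries side effects, so the patient demands a larger effect;
# the education program has fewer costs/risks, so its SWE is lower.
treatments = {
    "analgesic drug":      (1.0, 3.0),
    "education sessions":  (1.5, 1.0),
}

for name, (effect, swe) in treatments.items():
    verdict = "worthwhile" if worth_recommending(effect, swe) else "not worthwhile"
    print(f"{name}: {verdict}")
```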

References:

(1). Jaeschke R, Singer J, Guyatt GH. Measurement of health status. Ascertaining the minimal clinically important difference. Control Clin Trials 1989;10:407-15.

(2). Ferreira ML, Ferreira PH, Herbert RD, Latimer J. People with low back pain typically need to feel ‘much better’ to consider intervention worthwhile: an observational study. Aust J Physiother 2009;55:123-7.

(3). Ferreira ML, Herbert RD, Ferreira PH, et al. The smallest worthwhile effect of nonsteroidal anti-inflammatory drugs and physiotherapy for chronic low back pain: a benefit-harm trade-off study. J Clin Epidemiol 2013;66:1397-404.

(4). Christiansen DH, de Vos Andersen NB, Poulsen PH, Ostelo RW. The smallest worthwhile effect of primary care physiotherapy did not differ across musculoskeletal pain sites. J Clin Epidemiol 2018;101:44-52.

P.S. This was to help me solidify my topic in my own head and make sure I understand it. If you're interested in it, hit me up!