By Michael D. Cabana, MD, MPH
In clinical research, reproducibility (or consistency) of results across different trials improves clinicians’ confidence in an intervention (Hill, 1965). However, when reviewing the evidence for a probiotic or prebiotic supplement, the results are sometimes conflicting: one study claims an intervention may work, while another claims it may not. So how does the clinician deal with this situation?
To know how much confidence to place in any claim of benefit, clinicians need to consider the totality of the evidence and the quality of the studies. One tool is the systematic review, which searches in an unbiased manner for all studies of a particular intervention and, when possible, combines their results into a meta-analysis. The summary of these data points to either an effect or no effect. The most rigorous way to combine data is an individual patient-data meta-analysis (IPDMA). In addition, a clinician should determine whether a clinical trial is an efficacy study or an effectiveness study (Singal, 2014).
Efficacy or Effectiveness?
Efficacy studies ask, “Does the intervention work in a defined (usually an ‘ideal’) setting?” In general, the inclusion criteria for study participants are very selective. Patient adherence tends to be closely monitored. The clinicians conducting the trial may be specially trained in the intervention and its application. The intervention occurs in an ideal setting, and the risk of other confounding interventions (e.g., unusual diets, concurrent treatments) is limited.
On the other hand, effectiveness studies ask, “Does the intervention work in a real-world setting?” The inclusion criteria for study participants tend to be less selective. Patient adherence to the protocol is not necessarily strictly enforced. The clinicians conducting the trial tend to be representative of the typical physicians who would treat the condition. The intervention occurs in a more ‘real-world’ setting, where other confounding factors may be present.
For example, two relatively recent studies both examined the effect of a probiotic intervention, L. reuteri DSM 17938, for the treatment of infant colic. A study conducted by a team in Italy (Savino et al. 2010) noted that the intervention reduced colic symptoms; however, a study conducted by a team in Australia (Sung et al. 2014) showed no effect on colic.
Why the different results? In the Italian study, all the infants were breastfed. In addition, the breastfeeding mothers limited their dairy intake. The infants tended to be younger (mean age 4.4 weeks) and tended not to have received other treatments for colic or gastrointestinal symptoms. In contrast, the infants in the Australian study were breastfed or formula fed. The infants were older (median age 7.4 weeks) and were more likely to have been exposed to other treatments for gastrointestinal symptoms (such as histamine-2 blockers or proton pump inhibitors). The infants were recruited from many different settings, such as the emergency department.
Although the Italian and the Australian studies evaluated the same probiotic intervention for the same condition, they offer different information in terms of efficacy and effectiveness. Describing a study as either an “efficacy” study or an “effectiveness” study is not always dichotomous. Rather, studies exist on a spectrum, from more like an efficacy study to more like an effectiveness study. In the example above, the Italian study had stricter inclusion criteria and fewer confounding factors; as a result, it would tend to be classified as an efficacy study. The Australian study enrolled infants with colic who were older and more likely to have been exposed to other interventions; it would tend to be classified as more of an effectiveness study. The fact that the Australian study was a null study does not mean that the intervention was not effective in the ‘real world’. Rather, for the patients enrolled, the treatment was not effective when used in that particular setting and context. You may encounter infants with colic whose feeding and medical histories are more like those of the infants in the Italian study. Understanding the context of the studies helps identify the characteristics that may or may not apply to the infants with colic you treat in your clinic.
Which is better: Efficacy or Effectiveness?
When developing a new or experimental intervention, an efficacy study might be important to increase the likelihood of detecting a positive change. However, “real world” factors may make a difference in how a product is used. Perhaps an intervention is inconvenient (due to multiple doses throughout the day) or unpalatable for the patient. Perhaps the dosing regimen is complicated and primary care providers do not apply the correct dosing for patients. In these cases, an effectiveness study might be a better guide to how useful the intervention will be in clinical practice.
As a final note, it can be tempting to read only the abstract of a clinical trial to assess its results. However, in many instances the crucial details of the study (e.g., how the study participants were selected, who was included or excluded, what type of clinical setting was used) are buried in the methods section. Patient diet, exposure to other treatments, and comorbid conditions are all common confounding factors in trials evaluating supplements. When reading the literature and deciding whether a study applies to your practice, be sure to understand the full context and purpose of the study. “Was this study useful for determining clinical efficacy or clinical effectiveness?” is an important question for readers of probiotic and prebiotic clinical trials. Keeping this question in mind may help you better resolve what may appear to be inconsistencies among clinical trials.