October 20, 2019

Understanding science

Is diet soda more dangerous than sugar-sweetened beverages?

SMH


It’s often repeated in the field of nutrition, particularly nutritional epidemiology, that randomized controlled clinical trials (RCTs) are the gold standard in science, but they’re not feasible for long-term dietary interventions.

Instead, researchers rely on observational epidemiology, such as prospective cohort studies (explained in more detail in Studying Studies: Part III) where investigators identify a group of subjects, track them and their behaviors over time, and determine if and when they developed an outcome of interest, and see whether a behavior (or an exposure to something) is associated with that outcome.


These correlative results are often used to identify risk factors and impact public health policy.

Countless studies like these are published every year, but allow me to pick on a recent one, “Association Between Soft Drink Consumption and Mortality in 10 European Countries,” to give you some idea of how little I think they can provide in the way of reliable knowledge.

451,743 individuals were followed for an average of 16.4 years. The behaviors of interest were the consumption of soft drinks overall, as well as sugar-sweetened beverages (SSBs) like Coca-Cola Classic and artificially sweetened beverages (ASBs) like Diet Coke. The outcomes of interest in this case were all-cause mortality and cause-specific mortality.

So, what were the results?

Individuals who consumed two or more glasses (250 mL) per day of soft drinks (total; SSBs or ASBs) were 17% more likely to die at follow-up than those who consumed less than one glass per month (in the statistical parlance, hazard ratio [HR], 1.17; 95% CI, 1.11-1.22; p < 0.001).

Individuals who consumed two or more glasses (250 mL) per day of SSBs were 8% more likely to die at follow-up than those who consumed less than one glass per month (HR, 1.08; 95% CI, 1.01-1.16; p = 0.004).

Individuals who consumed two or more glasses (250 mL) per day of ASBs were 26% more likely to die at follow-up than those who consumed less than one glass per month (HR, 1.26; 95% CI, 1.16-1.35; p < 0.001).
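For readers who want to see the mapping explicitly, here is a minimal sketch of how the "X% more likely to die" phrasing corresponds to the reported hazard ratios. Treating HR − 1 as a simple percentage is the common shorthand, not an exact probability statement; the numbers are the hazard ratios and 95% confidence intervals quoted above.

```python
# Hazard ratios and 95% confidence intervals reported in the study.
results = {
    "total soft drinks": (1.17, (1.11, 1.22)),
    "SSBs":              (1.08, (1.01, 1.16)),
    "ASBs":              (1.26, (1.16, 1.35)),
}

# HR - 1, expressed as a percentage, is the "X% more likely" shorthand.
for beverage, (hr, (lo, hi)) in results.items():
    print(f"{beverage}: HR {hr:.2f} -> {hr - 1:.0%} higher hazard "
          f"(95% CI {lo - 1:.0%} to {hi - 1:.0%})")
```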

As Andrew Jacobs reported in the New York Times, what really grabbed the headlines was “the suggestion that drinking Diet Coke could be even more deadly than drinking Coca-Cola Classic.”

Let’s back up for a moment and ask a few questions about this study (that you should be asking of every single such non-randomized study, prospective cohort or otherwise).

How reliable are the data? In this case, how did the investigators determine whether an individual drank two or more glasses of ASBs per day over the course of more than 16 years? Consumption was assessed during one baseline visit using self-administered questionnaires (in other words, food frequency questionnaires, or FFQs). This is the same as asking you, reader, how many glasses of soft drinks you consume per day (or week, or month) and assuming that from now until the year 2035, this will not change. To repeat, directly from the investigators, “This study was also limited by a single assessment of soft drink consumption at baseline.”

If you choose to stop reading this now, I won’t fault you. After all, it is a lovely Sunday morning, and this pretty much tells you all you need to know about this study. I only wish the newspapers would report facts like this one in the subject line to spare people from reading further. That said, it’s probably worth at least skimming what follows, given the ubiquity of epidemiology in health news.

What are the absolute risks? While these were not reported (surprisingly, actually), we can get a general idea from the raw data provided. For example, what was the 26% associated increase in all-cause mortality in heavy ASB consumers versus those who consumed less than a glass per month based on? Out of the 225,543 individuals who reported less than a glass per month of ASBs at baseline, there were 21,032 deaths at follow-up. In other words, 9.3% died. For the 6,292 individuals who reportedly consumed two or more glasses of ASBs per day at baseline, there were 737 deaths at follow-up. In this group, 11.7% died. This works out to a relative risk increase of 26% (this is the raw number, which coincidentally is the same as the multivariable model; see below) and an absolute risk increase of 2.4%. For total soft drinks and SSBs, the raw numbers look similar. Remember, you can talk all you want about relative risk, but always keep at least one eye on absolute risk for some perspective.
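The arithmetic above can be reproduced directly from the reported counts. A minimal sketch in Python (the `risk` helper is just for illustration; the counts are the ones quoted in the paragraph):

```python
def risk(deaths: int, n: int) -> float:
    """Proportion of a group that had died at follow-up."""
    return deaths / n

# Reference group: < 1 glass of ASBs per month at baseline
low = risk(21_032, 225_543)   # ~0.093, i.e. 9.3%

# Heavy-consumption group: >= 2 glasses of ASBs per day at baseline
high = risk(737, 6_292)       # ~0.117, i.e. 11.7%

relative_increase = high / low - 1   # ~0.26 -> the "26% increased risk"
absolute_increase = high - low       # ~0.024 -> 2.4 percentage points

print(f"Relative risk increase: {relative_increase:.1%}")   # 25.6%
print(f"Absolute risk increase: {absolute_increase:.1%}")   # 2.4%
```

Both numbers describe the same raw data; the size of the headline depends entirely on which framing you choose.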

Were there any confounding variables, and were the investigators able to control for all of them? This is somewhat of a trick question because in the vast majority of observational epidemiological studies, there are many confounding variables, and the only confounding variables that are controlled for are the ones the investigators considered and collected (and even these can be inaccurate measures). In the case of the current study, heavy consumers of soft drinks were younger, more likely to be current smokers at baseline, less likely to have had higher education, and more likely to be physically active at baseline. Using multivariable models, the investigators adjusted for alcohol consumption, smoking, body mass index, physical activity, education status, menopausal status, use of hormone replacement therapy, and dietary intakes of total energy, red and processed meats, coffee, fruit and vegetable juices, and fruits and vegetables. Again, all of these variables were collected once, at baseline. A fair question to ask is how reliable these variables are, both in terms of the initial accuracy of the self-reported information and in extrapolating those data over more than 16 years.

Can reverse causation explain the association(s)? In this case, perhaps individuals who were unhealthy (e.g., overweight, obese, prediabetic) made the switch from SSBs to ASBs prior to the baseline measurement. This is the healthy- or unhealthy-user bias rearing its head (for more on the healthy-user bias, check out Studying Studies: Part III). In other words, does drinking ASBs lead to poor health or does poor health lead to drinking ASBs? The answer could be either, both, or neither, but the most important takeaway in this case is that a study like this one cannot answer this question. In other words, it really cannot establish cause and effect (except in rare circumstances, like cigarette smoking and lung cancer, where the strength and consistency of the association put it in the ballpark of the Bradford Hill criteria).

Is drinking diet soda going to lead you to an early grave? This study does not answer that question, nor does it bring us any closer to answering it. Nor will “more research” on the long-term effects of consuming artificial sweeteners if “more research” means more observational epidemiology. We can apply the same point here that one of my mentors made to me in explaining meta-analyses: “a hundred sow’s ears makes not a pearl necklace.” Piling up inadequate studies that can’t determine cause and effect while bemoaning the expense of RCTs suggests to me we need to rethink how we’re allocating our resources. Yes, long-term clinical trials are expensive, difficult to conduct, and will have limitations. But there’s one very important distinction between RCTs and the vast majority of observational epidemiological studies on diet and health: at least RCTs are not dead on arrival.

– Peter

Disclaimer: This blog is for general informational purposes only and does not constitute the practice of medicine, nursing or other professional health care services, including the giving of medical advice, and no doctor/patient relationship is formed. The use of information on this blog or materials linked from this blog is at the user's own risk. The content of this blog is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard, or delay in obtaining, medical advice for any medical condition they may have, and should seek the assistance of their health care professionals for any such conditions.
  1. Your conclusions make sense. We, in medicine, are enamored with “evidence-based medicine”. Unfortunately, too often we fail to examine the veracity of the information and methodology which led to the “evidence-based” recommendations or conclusions. And the relatively recent focus on meta-analyses sometimes exacerbates this issue, lending credence to the conclusions on the basis of a large “n” rather than the details of the methodologies, which may be flawed. The truth is that epidemiological studies applied to nutritional impact are rarely useful. In effect, little better than the unsubstantiated claims in the homeopathic nutritional industry. We default to these types of studies because it is so difficult to get adequate funding to do proper prospective controlled studies.

  2. Hi,

    You said that there could be the potential for reverse causation. The authors of the study noted that “excluded participants were those who reported cancer, heart disease, stroke, or diabetes at baseline.” This would help reduce that potential.

    Also, you say smoking was proven to cause cancer but no RCT was conducted to prove that. So, just because a study is not an RCT doesn’t necessarily mean it is dead on arrival.

    Lastly, in the absence of sufficient resources to conduct an RCT lasting decades, observational research is the next best thing. It is arguably better than doing nothing.

    Thanks,

    Andrew
