Invariably I get asked the question, “If carbohydrates are so bad, why did [so-and-so] lose weight on the [such-and-such] diet?”, where “such-and-such” diet is not a “low-carb” diet. Obviously, this is an important question and a pretty complex one.
There are several layers to this and, frankly, there are some things we can’t fully explain – I’ll always acknowledge this. That said, many of the successes (at least weight-wise, though hopefully by now you realize there is much more to health than just body composition) of popular diets can be explained by a few simple observations. Above is a list of this year’s most “popular” diets, according to Consumer Reports. Popularity, of course, was determined by a number of factors, including compliance with current government recommendations (sorry Atkins), number of people who have tried the diet, and reported success on the diets. So it’s actually quite misleading when the report says it’s reporting on the “most effective diets.”
Keep in mind the average American (i.e., at baseline) consumes about 2,500 to 2,700 calories per day (different sources, from NHANES to the USDA, give slightly different numbers, but this range is about right), of which about 450 grams (about 1,750 calories’ worth, or about 65% of total caloric intake) comes from carbohydrates. You can argue that those who are overweight probably consume an even greater amount of carbohydrates. But for the purpose of simplicity, let’s assume even the folks who go on these diets are consuming the national average of approximately 450 grams of carbohydrate per day (in compliance with governmental recommendations, as a percent of overall intake).
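To make that baseline concrete, here is the arithmetic as a quick sketch. The 4 kcal/g carbohydrate conversion and the 2,600 kcal midpoint are my own round-number assumptions; using the round conversion lands slightly above the ~65% quoted above, which is why all of these figures are prefixed with "about."

```python
# Back-of-the-envelope check of the baseline numbers above.
# Assumptions (mine): 4 kcal per gram of carbohydrate (the standard rough
# conversion) and 2,600 kcal/day as the midpoint of the 2,500-2,700 range.
KCAL_PER_G_CARB = 4
daily_kcal = 2_600
carb_grams = 450

carb_kcal = carb_grams * KCAL_PER_G_CARB    # ~1,800 kcal, i.e. "about 1,750"
carb_fraction = carb_kcal / daily_kcal      # roughly two-thirds of intake

print(f"{carb_kcal} kcal from carbohydrate = "
      f"{carb_fraction:.0%} of a {daily_kcal} kcal day")
```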
Take a look again at the figure below, which shows you how many calories folks are consuming on each diet and, more importantly, where those calories come from. [It’s not actually clear to me how Consumer Reports was able to figure out exactly how much folks eat, beyond self-reporting or diet-book recommendation, mind you. In other words, these numbers could actually be wrong, but it’s what we’ve got for now.]
[Note: in a more recent (2017, consumerreports.org) analysis, the following categories were included: initial weight loss, maintenance, calorie awareness, food variety, fruits and vegetables, and exercise.]
You’ll note that people on these diets, including the strictest low-fat high-carb diets, significantly reduce their total amount of carbohydrates (therefore reducing the amount of insulin they secrete). Even the Ornish diet, which is the most restrictive diet with respect to fat and most liberal with respect to carbohydrates, still reduces carbohydrate intake by about 40% from what people were likely eating pre-diet.
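Where does that ~40% figure come from? A rough reconstruction, using numbers that appear elsewhere in this post: the ~450 g/day national-average baseline, the ~1,500 kcal/day on-diet intake from the Consumer Reports table, and the Ornish diet's 70-75% carbohydrate share. The 72% midpoint and the 4 kcal/g conversion are my own simplifying assumptions.

```python
# Rough sketch of the ~40% carbohydrate reduction on the Ornish diet.
# Assumptions (mine): ~1,500 kcal/day on-diet intake, 72% of calories from
# carbohydrate (midpoint of the 70-75% range), 4 kcal per gram of carbohydrate.
baseline_carb_g = 450            # national-average intake, from above
ornish_kcal = 1_500
ornish_carb_share = 0.72

ornish_carb_g = ornish_kcal * ornish_carb_share / 4   # ~270 g/day
reduction = 1 - ornish_carb_g / baseline_carb_g       # ~0.40

print(f"{ornish_carb_g:.0f} g/day on Ornish, "
      f"about {reduction:.0%} below the pre-diet baseline")
```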
The reason, I believe, most of these diets have some efficacy – at least in the short-term – is that they all reduce sugar and highly refined carbohydrate intake, either explicitly or implicitly. No one on the Ornish Diet or Jenny Craig Diet is eating candy bars and potato chips, at least not if they are adhering to it. Hence, these diet plans do “clean up” the eating habits of most folks.
Someone made a great point in response to my post on why fruits and vegetables are not actually necessary for good health. The point was, essentially, that telling people to eat 5-6 servings per day of fruits and vegetables can hopefully drive a beneficial substitution effect. If you tell someone who eats Twinkies, potato chips, and candy bars all day to eat more fruit (and they do), you’ve almost guaranteed an improvement in their health if they eat bananas and apples instead of the aforementioned junk food. That doesn’t mean bananas and apples are “good for you” – it just means they are less “bad for you.” Here’s the kicker, though. We’re led to believe that the reason such folks get leaner and more healthy is because they are eating more fruits or more vegetables or more grains or more [fill-in-the-blank], rather than because they eliminated the most egregious offenders from their diet.
I can’t really overstate this point. I have no intention of engaging in a battle with proponents of plant-based eating or no-saturated-fat diets. I’m reasonably confident that the proponents of these diets are good people who really want to help others and have nothing but the best intentions. But that doesn’t mean we can or should overlook the errors being made in drawing their conclusions. Many people do very well on plant-based (e.g., vegan) diets, for sure. But why are they doing well? That is the single most important question we should be asking ourselves. Why did the people in the China Study who ate more plants do better than those who ate more animals (assuming they did)? Parenthetically, if you actually want the answer to this question, beyond the brief treatment I give it below, please read Denise Minger’s categorically brilliant analysis of the study.
I know a lot of people who eat this way and, I’ve got to say, these folks do not eat a lot of sugar or a lot of highly refined carbohydrates. In fact, many are so conscientious about their health that they actually have far better carb habits than most (e.g., which breads they choose, which fruits and vegetables they eat).
While I do plan to write an entire post on this topic of what one can and cannot conclude from an experiment, I do want to at least make the point here: The biggest single problem with nutrition “science” is that cause and effect are rarely linked correctly. Stated another way, it’s one thing to observe an outcome, but it’s quite another to conclude the actual cause of that outcome.
Let me digress for a moment to provide an important example of this phenomenon. One of the most prominent figures in the diet/nutrition space is Dr. Dean Ornish. I don’t know Dr. Ornish personally, and I can only assume that he is a profoundly caring physician who has dedicated his life to helping people live better lives. He is nationally, and internationally, regarded for his efforts.
One of the reasons for his prominence, I believe, is the work he did in the early 1990s on lifestyle modification and the impact it can have on reversing coronary artery (i.e., heart) disease. In particular, Dr. Ornish was the principal investigator on a trial published in the journal The Lancet in 1990. An abstract of the paper can be found here. But as always, I STRONGLY encourage folks with access (or folks who are willing to purchase it) to read the paper in its entirety. For people who don’t want to read the study completely, or who may not have much experience reading clinical papers, I want to devote some time to digging into this paper. Why? Well, for starters, reading abstracts, hearing CNN headlines, or reading about studies in the NY Times doesn’t actually give you enough information to really understand if the results are applicable to you. Beyond this reason, and let me be uncharacteristically blunt, just because a study is published in a medical journal does not mean it is worth the paper it is printed on. My mentor at the NIH, Dr. Steve Rosenberg, once told me that a great number of published studies are never cited again (I forget the exact number, but it was staggering, over 50%). Translation: whatever they published was of such little value that no one ever made reference to it again.
I am, to be clear, not implying this is the case for this trial, but I want you to understand why it’s important to read papers fully.
This trial, The Lifestyle Heart Trial, prospectively randomized a group of not-so-healthy patients into two treatment groups: the control group and the experimental group (or what we’d call the “treatment” or “intervention” group).
The experimental group (22 patients) was asked to adhere to the following changes for one year:
- Change their diet to a low-fat vegetarian diet (10% fat, though obviously no animal fat; 15-20% protein; 70-75% complex carbohydrates) with several other restrictions (e.g., no sugar, flour, or refined carbohydrates; limited alcohol; no caffeine; limited salt; limited cholesterol intake; no egg yolks)
- Smoking cessation
- Exercise regimen (minimum of 3 hours per week, at a minimum of 30 minutes per session)
- Stress management (e.g., meditation, progressive relaxation, breathing modification)
- Join social support groups for help with adherence (twice weekly)
The control group (19 patients), obviously, remained under “usual-care” (i.e., no change in lifestyle).
One aspect of this trial that made the results particularly interesting was the use of angiography to actually measure and document the coronary artery lesions (i.e., blockages in the coronary arteries) in the patients before and after the lifestyle interventions. The study was not powered to measure “hard” outcomes (e.g., heart attacks, strokes, cancer, death), so the use of blood markers, physical parameters, and angiography were the best proxies for a reduction in disease risk. In other words, there were not enough subjects in the study to determine a difference in these “hard” outcomes, so we can’t make a conclusion about such events, only the changes in “soft” outcomes. I’m not discounting soft outcomes, only pointing out the distinction for folks not familiar with them.
So what happened after a year of intervention versus no intervention?
First off, and perhaps most importantly from the standpoint of drawing conclusions, compliance was reported to be excellent and the differences between the groups were statistically significant on every metric, except total average caloric intake. In other words, for every intended difference between the groups a difference existed, except that on average they ate the same number of calories (though obviously from very different sources), which was not intended to be different as both groups were permitted to eat ad libitum – meaning as much as they wanted.
Who was “healthier” at the end of a year? The table below shows the changes in both groups. If you want a quick primer on p-values, this is as good a time as any to get one. These tables (i.e., results tables) are a bit cumbersome if you’re not used to looking at them, so let me walk you through one row in detail. Let’s look at HDL cholesterol concentration. In the experimental group, HDL-C fell slightly from 1.00 +/- 0.26 mM (39 +/- 10 mg/dl for Yankees like me) to 0.97 +/- 0.40 mM (38 +/- 15 mg/dl), while in the control (i.e., no-intervention) group it fell slightly from 1.35 +/- 0.52 mM (52 +/- 20 mg/dl) to 1.31 +/- 0.38 mM (51 +/- 15 mg/dl). It’s hard to tell by inspection whether this change was statistically significant, so you glance at the p-value, which tells you it was not. (To be exact, the p-value of 0.8316 tells you there is an 83% chance of seeing a difference at least this large from chance alone. As a general rule, we don’t consider a difference to be statistically significant (meaning we’re going to assume it wasn’t just a chance fluctuation, the roll of the dice) until the p-value is below 0.05, and ideally below 0.01.)
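If the idea of a p-value still feels abstract, a small simulation makes it concrete. The sketch below is entirely my own illustration with made-up numbers (not the trial's data): it estimates a p-value by permutation, i.e., shuffle the group labels many times and count how often chance alone produces a between-group difference at least as large as the one actually observed.

```python
import random

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Estimate a two-sided p-value by permutation: how often does a random
    relabeling of the pooled data produce a difference in group means at
    least as large as the observed difference?"""
    rng = random.Random(seed)  # seeded so the result is reproducible
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign group labels
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Clearly separated groups: a large observed difference is rarely matched
# by chance relabelings, so the p-value is small (significant).
p_diff = permutation_p_value([5.1, 5.3, 4.9, 5.2], [6.0, 6.2, 5.9, 6.1])

# Heavily overlapping groups: chance easily matches the tiny observed
# difference, so the p-value is large (not significant).
p_same = permutation_p_value([5.1, 5.3, 4.9, 5.2], [5.2, 5.0, 5.3, 5.1])
```

This is the intuition behind the 0.8316 above: if chance alone reproduces the observed difference 83% of the time, the difference tells you essentially nothing.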
Take a moment to look over the rest of the table (or just skip reading it since I’m going to keep talking about it anyway).
What else was not significantly changed?
- Triglyceride level
- Apoprotein A-1 (not surprising, I guess, since HDL particles carry the bulk of apo A1)
- Blood pressure, both systolic (“top number”) and diastolic (“bottom number”)
What was significantly changed?
- Total cholesterol concentration (down in both groups, but significantly more in the experimental group)
- LDL cholesterol concentration (same as above)
- Apoprotein B (again, to be expected given that LDL particles carry apo B)
- Body weight (this was, as you can see from both visual inspection and the p-value, the most significant change between the two groups)
- Though not shown in this table, the experimental group also reported less chest pain severity (though chest pain frequency and duration were not statistically different).
What about the angiographic differences? That is, how did the actual measured lesions in the subjects’ coronary arteries change?
Seven patients were excluded from this analysis: 1 patient in the control group (patient underwent an emergency angiogram in another hospital, but lesion sizes were not measured); and 6 patients in the experimental group (1 died while exercising in an “unsupervised gym,” 1 could not be tested at follow-up due to a large unpaid hospital bill, 1 patient dropped out, 1 patient’s pre-intervention angiogram was lost, and 2 patients did not have adequate overlay of pre- and post-images). To justify the findings of this trial we need to believe that the exclusion of these seven patients did not alter the conclusions, but we’ll never know. This disproportionate exclusion of 6 patients from the treatment group and only 1 patient in the control group, for (perhaps) the most interesting outcome, is (perhaps) the most significant methodological flaw of this trial.
Excluding these seven patients, the experimental group experienced an overall reduction in coronary artery stenosis (blockage) from a mean of 40% to 37.8%, while the control group experienced a progression in coronary artery stenosis from a mean of 42.7% to 46.1%, which was statistically significant. This trend also held for larger lesions (i.e., those starting out over 50%). Most importantly, in my mind, within the experimental group there was a strong correlation between adherence score and lesion regression. Translation: The more rigorously a patient was compliant with the lifestyle changes, the greater was the regression of their coronary artery lesions. This correlation is quite suggestive that the lifestyle change was responsible for the regression of coronary lesions.
I know what you’re thinking…Is there a point embedded somewhere in here? Yes.
Here is my point: This was a well-done trial from the standpoint of testing what it set out to test. It set out to test if a comprehensive lifestyle change could reduce markers of coronary artery (heart) disease, which it did. But that’s it. It did not tell us if a comprehensive lifestyle change reduced actual heart attacks, which it very well might have if there were hundreds of patients in the study. It is equally important to understand what we cannot conclude from this study. We cannot conclude which element of the lifestyle intervention led to the reduction in markers of heart disease. We know that in aggregate the lifestyle changes made a positive difference, but which ones actually caused the change and which were bystanders remains unknown.
Let’s take a leap of faith and hypothesize that the dietary intervention (rather than, say, the social support) had the greatest impact on the measured parameters in the subjects. It’s certainly the most likely factor in my mind. But what, exactly, can I conclude? Can I conclude that a low-fat vegetarian diet is the “best” diet for reducing the risk of heart disease? Nope. I can only conclude that a low-fat vegetarian diet is better than the average American diet consumed by the control group (if you are willing to stipulate that the dietary intervention was the most significant driver of outcome). Why? Because that’s what was tested. Unfortunately, this study (and hundreds like it) can shed no light on which specific aspect of the diet in the experimental group provided the advantage. Was it the reduction in fat intake? The reduction in animal protein? The reduction in sugar? The reduction in simple, highly refined carbohydrates? Unfortunately, we do not know.
SLIGHT DIGRESSION: Tragically, virtually all U.S. nutritional guidance, and the policies, recommendations, and food-based infrastructure that follow from it, was derived from this type of science. Maybe the conclusions are correct. Is fat bad for us? Are complex carbs the best thing we can eat? Though theoretically possible, there is no scientific evidence telling us this. In fact, there is ample evidence suggesting the opposite is true. Hence, this is why – exactly why – we are founding the Nutrition Science Initiative (NuSI) with a group of scientists who all agree that we need to actually test these hypotheses in the most rigorous manner possible, and only then make dietary recommendations.
How bad is it that nutritional recommendations are based on weak science?
Consider the following hyperbolic example: Imagine a clinical trial of patients with colon cancer. One group gets randomized to no treatment (we’ll call them the “control group”). The other group gets randomized to a cocktail of 14 different chemotherapy drugs, plus radiation, plus surgery, plus hypnosis treatments, plus daily massages, plus daily ice cream sandwiches, plus daily visits from kittens (we’ll call them the “treatment group”). A year later the treatment group has outlived the control group. Great news, to be sure. The treatment worked! Here’s the problem…we “conclude” it was the 7th and 9th drugs in the group of 14 drugs, plus the kittens that caused the treatment effect and we enact recommendations based on that. Are we right? Sure, it’s possible, but actually it’s quite unlikely. The only way to know for certain if a treatment works is to isolate it from all other variables and test it (in a randomized prospective fashion, of course). Do the kind of science we were taught to do in 8th grade.
So what do I think happened in Dr. Ornish’s study? I think the reduction in sugar and simple carbohydrates played the largest single role in the improvements experienced by the experimental group, but I can’t prove it from this study any more than one can prove a low-fat vegetarian diet is the “best” diet. We can only conclude that it’s better than eating Twinkies and potato chips which, admittedly, is a good thing to know.
Ok, back to the Consumer Reports “best diet” list with which I started this discussion. Another point you’ll note in this table (up at the very top) is the overall amount of caloric restriction in each diet – an average of about 1,500 calories per day. The caveat here is that these numbers are self-reported, so everything needs to be taken with more than the proverbial grain of salt. I know what you’re thinking, “Hey, but you said calories don’t matter – why should it matter how many calories these folks are eating?” Remember, you can always “force” weight loss by creating an energy imbalance, provided folks are willing to suffer (e.g., work really hard and/or starve). The reality TV show The Biggest Loser is a great example of this. Participants on the show are basically starved (under 1,000 calories per day) relative to their expenditure (6 hours per day of exercise at a cost of possibly as much as 4,000 calories per day). The question is, or at least should be, does this form of “dieting” result in long-term, sustainable weight loss? The overwhelming evidence is that calorie restriction (i.e., reducing calories significantly below active or deliberate caloric expenditure) results in transient weight loss, not sustained weight loss. Why? There are a few reasons, but I think the biggest two are:
- People don’t like to be hungry, and if they are reducing their caloric intake by reducing fat intake, they seldom find themselves satiated.
- Semi-starvation reduces basal metabolic rate, so your body actually adjusts to the “new” norm and slows down its rate of mobilizing internal fat stores.
Furthermore, most people can’t do six hours of heavy exercise a day (not to mention the world is full of people who do six hours of physical labor a day and are obese; I was fat doing 4 hours of exercise per day). The real tragedy of this is that when folks restrict calories and then resume, when they can’t tolerate the discomfort of relative starvation anymore, they usually end up gaining back all, if not more, of the weight they lost in the first place.
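The second reason, metabolic adaptation blunting further loss, can be illustrated with a deliberately simple toy model. Everything here is my own construction for illustration only: the 3,500 kcal/lb rule of thumb and a 15 kcal/day drop in expenditure per pound lost are illustrative assumptions, not measured values from any study.

```python
# Toy model (my own illustration): weight loss at a fixed intake, with and
# without downward adaptation of daily energy expenditure as weight is lost.
KCAL_PER_LB = 3500  # common rule-of-thumb energy content of a pound of fat

def simulate_weeks(intake_kcal, start_tdee, adaptation_kcal_per_lb, weeks):
    """Each week: deficit = expenditure - intake; pounds lost = deficit*7/3500.
    Expenditure then drops by `adaptation_kcal_per_lb` per pound lost."""
    tdee = start_tdee
    lost = 0.0
    weekly = []
    for _ in range(weeks):
        deficit = max(tdee - intake_kcal, 0)   # can't "un-lose" on a deficit
        wk_loss = deficit * 7 / KCAL_PER_LB
        lost += wk_loss
        tdee = start_tdee - adaptation_kcal_per_lb * lost
        weekly.append(wk_loss)
    return lost, weekly

# Naive arithmetic: a fixed 1,100 kcal/day deficit, no adaptation.
no_adapt, w_flat = simulate_weeks(1500, 2600, 0.0, 12)
# With adaptation: expenditure falls 15 kcal/day per pound lost, so each
# successive week's loss is smaller than the last.
with_adapt, w_adapt = simulate_weeks(1500, 2600, 15.0, 12)
```

The point of the sketch is qualitative, not quantitative: under any plausible adaptation parameter, weekly loss shrinks over time and total loss falls short of the naive fixed-deficit arithmetic, which is one reason "starve harder" is not a durable strategy.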
Not to beat a dead horse, but I’d be remiss if I didn’t make this point one more time: When someone reduces caloric intake to 1,500 calories per day – even on a “balanced” diet – they are considerably reducing carbohydrate intake in aggregate and almost always disproportionately with respect to the worst offending carbs (e.g., sugars, simple refined carbs).
Ultimately, the question we’re driving at is, why do these diets work? I argue that each of these diets does some good, especially with respect to eliminating the worst offending agents along the insulin-fat-metabolic derangement axes. The problem, unfortunately, is that the scientific community is completely confused as to why they work. Most people think the primary reason these diets work is that they reduce fat intake and total calories.
I argue that reduction of fat intake has nothing to do with it and that the reduction of total calories has a transient effect. And, the majority of the benefit folks receive comes from the reduction of sugars and highly refined carbohydrates. But now I’m repeating myself, aren’t I?