Vitamin D(éjà vu): new study, same old problems

Ever have that feeling of déjà vu? Like you’ve seen something before, even when you know it isn’t possible? If so, then you’ll understand the surreal disbelief I was experiencing as I read an article in the New England Journal of Medicine on the effect of vitamin D supplementation on fracture risk. The study had only just been published, so I couldn’t have seen it before… or could I?

Rest assured, this was no glitch in the Matrix. My déjà vu had a far less dramatic (though arguably more frustrating) explanation: new study, same old problems.

The Flashback

Two years ago, I wrote a newsletter calling attention to another study – this one published in JAMA – involving vitamin D supplementation. Using data from a 5-year, placebo-controlled, double-blinded, randomized clinical trial (RCT) of over 18,000 participants (the “VITAL” trial), the investigators had concluded that supplementation with vitamin D did not have a significant effect on mood or depressive symptoms. This study design certainly seems robust, yet as I discussed in my newsletter, the devil is often in the details, and the conclusions from even a “good” RCT can crumble under closer scrutiny.

The Rerun

Flash forward to the NEJM study published two weeks ago. Like the JAMA authors, the investigators behind the NEJM study utilized data from the VITAL trial, this time tracking the incidence of bone fractures among nearly 26,000 participants over the 5-year follow-up period. So once again, we have a large, placebo-controlled, double-blinded RCT, and once again, I find myself asking many familiar questions before putting any faith in the authors’ conclusions. For most of these, I’ll spare you the repetition and refer you instead back to my original discussion, but a few points bear revisiting.

Did they reach their target endpoint (i.e., what were the follow-up vitamin D levels)?  

On its surface, this study aimed to assess whether vitamin D supplementation reduced the risk of incident fractures, but the underlying question is whether vitamin D levels in the body are inversely associated with fracture risk. For vitamin D supplementation to have any effect relative to placebo, it needs to increase the body’s supply of vitamin D; if it doesn’t, then the treatment and placebo groups are effectively identical.

So did the researchers achieve a difference in vitamin D levels over the course of the study? We have no idea. Vitamin D was measured in fewer than 25% of participants at follow-up, and only a fraction of those participants had also provided baseline samples for comparison (~10% of the total study population). Data from this small subset indicate that the treatment group experienced an increase in vitamin D levels (from 29.2 ng/mL at baseline to 41.2 ng/mL at the 2-year time point) while the placebo group did not, but how sure can we be that the trend generalizes to the entire cohort? The problem is exacerbated by using bone fractures – relatively rare, all-or-none events – as the outcome of interest. Only ~2,600 of the 25,871 participants provided baseline and follow-up blood samples for analysis, and only 1,551 experienced fractures during the course of the study. It’s conceivable that these two populations had no overlap at all, and while such an extreme would be statistically unlikely, we nevertheless can’t know the extent to which data from one of these groups apply to the other.
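
It’s worth putting rough numbers on that overlap point. Below is a minimal sketch (assuming, generously, that the ~2,600 blood-sample participants were a purely random draw from the full cohort, which real subsamples rarely are) of how many of the 1,551 fracture cases we’d expect to find in the measured subset:

```python
from scipy.stats import hypergeom

# Figures quoted above (VITAL fracture analysis):
total     = 25_871   # full analysis cohort
fractures = 1_551    # participants who experienced a fracture
sampled   = 2_600    # participants with baseline AND follow-up blood samples

# Model the blood-sample subset as a random draw from the cohort,
# counting how many fracture cases land in it (hypergeometric).
overlap = hypergeom(M=total, n=fractures, N=sampled)

print(f"Expected overlap: {overlap.mean():.0f} participants")  # ~156
print(f"P(zero overlap):  {overlap.pmf(0):.1e}")               # effectively zero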
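```

So zero overlap is indeed vanishingly unlikely, but even under this friendliest assumption, only ~150 of the 1,551 fracture cases would have had their change in vitamin D actually measured.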

How were the intervention (adherence to supplementation protocol) and outcomes (bone fractures) monitored?

Without direct measurement of vitamin D levels and their changes over time, the investigators relied instead on self-report questionnaires to assess supplementation adherence, defined as taking at least two-thirds of the assigned pills. The authors report adherence of 85.4% at 5 years of follow-up – a respectable level as far as clinical trials are concerned. But bear in mind that the reliability of this value depends on participants accurately recalling and reporting their study activity for an entire year in annual surveys, a notoriously flawed methodology.

The outcome of interest was likewise measured by self-report, with participants reporting fractures in annual questionnaires. To be fair, self-reports of fractures are far less prone to memory errors than self-reports of vitamin intake. (After all, which is harder: remembering how many days you forgot your vitamins last fall, or remembering whether you broke a hip?) The authors also verified most fracture reports against medical records. Still, it’s worth reiterating that objective measurements are virtually always preferable to participant surveys, and RCTs can be just as guilty of unreliable methodology as less rigorous study designs.

What were the baseline characteristics?

Like the JAMA study, the NEJM study did not limit its analysis to those with low baseline vitamin D levels. Individuals are generally considered deficient in vitamin D if circulating levels are below 20 ng/mL, and only about 13% of study participants met this definition at the start of the study (at least among participants who provided baseline blood samples – roughly 60% of the full cohort).

The problem here is that supplementation to overcome a deficiency is likely to have more dramatic, or even qualitatively different, effects than supplementation to elevate already-healthy vitamin D levels. To address this issue, the authors stratified participants into quartiles based on baseline vitamin D levels, and their results demonstrated no significant differences in fracture risk between quartiles. But it bears noting that the data did show a trend toward highest risk among the lowest quartile (vitamin D ≤24 ng/mL) and lowest risk among the highest quartile (vitamin D ≥37 ng/mL). With only about 4,000 participants in each quartile (corresponding to 100-200 fractures per group), it’s possible that the study was simply underpowered, and that the trend would have reached significance with larger sample sizes.
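
As a back-of-the-envelope check on that power concern, here’s a quick sketch. The fracture rates below are invented for illustration (they fall within the 100-200 fractures per ~4,000-participant quartile range noted above) and use the standard arcsine approximation for comparing two proportions; none of this reproduces the paper’s own power analysis:

```python
from math import asin, sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided comparison of two proportions,
    via Cohen's arcsine effect size h and a normal approximation."""
    h = 2 * (asin(sqrt(p1)) - asin(sqrt(p2)))
    return norm.cdf(abs(h) * sqrt(n_per_group / 2) - norm.ppf(1 - alpha / 2))

# Hypothetical scenario: a 4.0% fracture rate in the lowest quartile vs.
# 3.2% in the highest (a 20% relative difference), ~4,000 per quartile.
print(f"Power: {power_two_proportions(0.040, 0.032, 4_000):.0%}")  # ~49%
```

Under these invented rates, a genuine 20% difference in fracture risk between quartiles would be detected only about half the time, which is entirely consistent with a real trend failing to reach significance.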

What was the dose of the intervention?

The vitamin D group was given a dose of 2000 IU/day. To reiterate one of my critiques of the JAMA study, this dose is fairly low. (Over-the-counter D3 supplements for adults typically range from about 500 to 10,000 IU.) With pre- and post-treatment comparisons from only 10% of participants, we just can’t be sure that this dose was sufficient to raise vitamin D to a clinically meaningful level above the placebo group. Parenthetically, in our practice we do not supplement with a dose of 2000 IU: either we don’t supplement at all (if levels are adequate), or we supplement with 5000 IU when supplementation is warranted.

To make matters worse, a sizable subset of participants took vitamin D supplements outside of the study directives, particularly in the placebo group. By the 5-year follow-up point, 10.8% of participants in the placebo group reported taking vitamin D supplements of at least 800 IU (vs. 6.4% in the vitamin D group). Again, these numbers are based on self-reporting and are subject to an unknown level of error, but even taking them at face value, this crossover potentially undermines the results by blurring the distinction between the interventions for the two groups.
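
To get a feel for how much this kind of crossover can dilute an intention-to-treat comparison, consider a toy calculation. Only the 85.4% adherence and 10.8% crossover figures come from the study; the fracture rates are hypothetical, and treating any out-of-protocol supplementation as equivalent to the study dose overstates the crossover, but it keeps the arithmetic simple:

```python
# Hypothetical "true" 5-year fracture rates by actual exposure:
rate_untreated = 0.040   # assumed rate without effective supplementation
rate_treated   = 0.032   # assumed rate with it (a 20% relative reduction)

adherence = 0.854   # reported adherence in the vitamin D group
crossover = 0.108   # placebo participants reporting >=800 IU on their own

# Each arm's observed (intention-to-treat) rate blends both exposures:
obs_vitd    = adherence * rate_treated + (1 - adherence) * rate_untreated
obs_placebo = (1 - crossover) * rate_untreated + crossover * rate_treated

observed_rrr = 1 - obs_vitd / obs_placebo
print(f"True RRR: 20.0%  |  Observed RRR: {observed_rrr:.1%}")  # ~15.2%
```

Non-adherence and crossover both pull the two arms toward each other, so whatever true effect exists shows up smaller in the intention-to-treat comparison.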

Déjà vu all over again

Using the same pool of data as the JAMA study, the NEJM study was bound to share many of the same weaknesses, and the way those data were analyzed certainly didn’t make up for the deficiencies. So while it’s tempting to see a large, hot-off-the-press randomized controlled trial and get wrapped up in the excitement of something new, in this case, it’s just the same old analysis of vitamin D(éjà vu).

– Kathryn Birkenbach and Peter Attia

 


Disclaimer: This blog is for general informational purposes only and does not constitute the practice of medicine, nursing or other professional health care services, including the giving of medical advice, and no doctor/patient relationship is formed. The use of information on this blog or materials linked from this blog is at the user's own risk. The content of this blog is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard, or delay in obtaining, medical advice for any medical condition they may have, and should seek the assistance of their health care professionals for any such conditions.