January 4, 2021

Understanding science

#143 – John Ioannidis, M.D., D.Sc.: Why most biomedical research is flawed, and how to improve it

“We need to defend our method. We need to defend our principles. We need to defend the honesty of science in trying to communicate it rather than building exaggerated promises or narratives that are not realistic.” —John Ioannidis

Read Time 34 minutes

John Ioannidis is a physician, scientist, writer, and a Stanford University professor who studies scientific research itself, a process known as meta-research. In this episode, John discusses his staggering finding that the majority of published research is actually incorrect. Using nutritional epidemiology as the poster child for irreproducible findings, John describes at length the factors that play into these false positive results and offers numerous insights into how science can course correct.  


We discuss:

  • John’s background, and the synergy of mathematics, science, and medicine (2:40);
  • Why most published research findings are false (10:00);
  • The bending of data to reach ‘statistical significance’, and how bias impacts results (19:30);
  • The problem of power: How over- and under-powered studies lead to false positives (26:00);
  • Contrasting nutritional epidemiology with genetics research (31:00);
  • How to improve nutritional epidemiology and get more answers on efficacy (38:45);
  • How pre-existing beliefs impact science (52:30);
  • The antidote to questionable research practices infected with bias and bad incentive structures (1:03:45);
  • The different roles of public, private, and philanthropic sectors in funding high-risk research that asks the important questions (1:12:00);
  • Case studies demonstrating the challenge of epidemiology and how even the best studies can have major flaws (1:21:30);
  • Results of John’s study looking at the seroprevalence of SARS-CoV-2, and the resulting vitriol revealing the challenge of doing science in a hyper-politicized environment (1:31:00);
  • John’s excitement about the future (1:47:45); and
  • More.


John’s background, and the synergy of mathematics, science, and medicine [2:40]


  • John considers himself a “scientist in the works”
  • “I’m trying to be a scientist. I think that this is not an easy job. It means that you need to reinvent yourself all the time. You need to search for new frontiers, for new questions, for new ways to correct errors and to correct your previous self, in some way.”
  • He has a background in mathematics and brings mathematics to the study of science
  • Born in New York City, he grew up in Athens
  • Both of his parents were physician scientists — he heard their stories of clinical exposure but also saw them working on their research

How did you decide to also pursue something in the biological sciences in parallel, as opposed to staying purely in the natural or philosophical sciences of mathematics?

  • Medicine had the attraction of being a profession where you can save lives: “The ability to make a difference for human beings and to save lives, to improve their quality of life, seem to be something that was worthwhile pursuing.”
  • But mathematics is critical: “mathematics are the foundation of so many things and they can really transform our approach to questions that, without mathematics, it would be very difficult to make much progress.”

How mathematics, science, and medicine synergize

  • John sees mathematics, science, and medicine as complementary to each other
  • In fact, to do something very meaningful, you need all three; otherwise you risk “losing the whole”
    • Medicine is amazing in terms of its possibilities to help people
    • However, the scientific method must be applied if you want to get reliable evidence (see Richard Feynman’s famous explanation of the scientific method)
    • You also need quantitative approaches (i.e., mathematics)

“I think that none of them is possible to dispense without really losing the whole, and losing the opportunity to do something that really matters eventually.” —John Ioannidis

John’s studies:

  • John won the highest honor that a graduating college student could win in mathematics in Greece at the time
  • John finished medical school in Athens at the National University of Athens
  • He went to Harvard for residency training 
  • Then to Tufts Medical Center for training in infectious diseases
  • At the same time, he was also doing joint training in healthcare research

People that shaped John’s thinking:

  • Professor of Epidemiology at Harvard, Dimitrios Trichopoulos 
  • In residency training, a great physician scientist in infectious diseases, Bob Moellering, who was the physician-in-chief and a professor of medical research at Harvard — “an amazing personality in terms of his clinical acumen and his approach to patients”
  • At the end of residency training (1992), John had a “revelation” after meeting Tom Chalmers and Joseph Lau at Tufts who were the ones advancing the frontiers of evidence-based medicine

“It was a revelation for me because somehow what they were proposing was mixing mathematics’ rigorous methods, evidence, and medicine in one coherent whole. . . [until that point] I was just seeing lots of clinical exposures where there was very little evidence to guide us. There was no data, or very poor data, and a lot of expert based opinion guiding everything that was being done.” —John Ioannidis


Why most published research findings are false [10:00]

John came onto Peter’s radar around 2005 when John published this paper: Why Most Published Research Findings Are False

  • The paper suggested that more than half of scientific papers claiming to have found a statistically significant signal were actually just a “false positive”
  • Peter was blown away by this paper, but he was “primed” to believe it because his mentor in post-doc training had once told him that ~70% of published papers were never cited again (outside of self-citation), suggesting that most papers are either irrelevant or wrong

John describing his 2005 paper

  • The paper used a mathematical model, matched to empirical data that had accumulated over time, to understand the validity of the different pieces of research being produced
  • The impetus for the paper was the disillusionment of many scientists who, when evidence-based medicine started, believed we finally had a tool for getting very reliable evidence for decision making
  • But they quickly realized that the vast majority of results were unreliable and could not be replicated: “It was the rule that we had either unreliable evidence or, perhaps even more commonly, no evidence.”
  • The mathematical construct would try to: i) explain what is going on, and ii) predict what might happen if some of the circumstances of how we do research were to change
  • The first question explored: what is the chance that a statistically significant result is indeed a “non-null” effect (rather than just a “red herring”)?
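That first question has a closed-form answer in the simplest, bias-free case of the 2005 paper: the positive predictive value (PPV) of a claimed finding is (1−β)R / ((1−β)R + α), where R is the pre-study odds that a probed relationship is true, α is the type I error rate, and 1−β is the power. A minimal sketch of how field characteristics drive PPV (the specific values R = 0.1 and the power levels are illustrative assumptions, not numbers from the episode):

```python
def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value of a claimed finding (simple, bias-free case).

    R     -- pre-study odds that a probed relationship is true
    alpha -- type I error rate (significance threshold)
    power -- 1 - beta, probability of detecting a true effect
    """
    true_positives = power * R      # true relationships correctly flagged
    false_positives = alpha         # null relationships flagged by chance
    return true_positives / (true_positives + false_positives)

# Exploratory field where 1 in 10 probed relationships is real (R = 0.1):
print(round(ppv(0.1), 2))               # 0.62 -> barely better than a coin flip
# Same field, but with an underpowered study (power = 0.2):
print(round(ppv(0.1, power=0.2), 2))    # 0.29 -> most "positive" findings are false
```

The sketch shows why low pre-study odds and low power together push PPV below 50%, which is the core of the paper's claim that most published findings in such fields are false.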

In order to calculate this, you need to take into account:

{end of show notes preview}

Would you like access to extensive show notes and references for this podcast (and more)?

Check out this post to see an example of what the substantial show notes look like. Become a member today to get access.

Become a Member

John Ioannidis, M.D., D.Sc.

John Ioannidis is a physician-scientist, writer, and Stanford University professor who has made contributions to evidence-based medicine, epidemiology, and clinical research. Ioannidis studies scientific research itself (meta-research), primarily in clinical medicine and the social sciences. He is one of the world’s foremost experts on the credibility of medical research, and he is the co-director of the Meta-Research Innovation Center at Stanford.

Ioannidis’ paper “Why Most Published Research Findings Are False” has been the most-accessed article in the history of the Public Library of Science (over 3 million views as of 2020).


Disclaimer: This blog is for general informational purposes only and does not constitute the practice of medicine, nursing or other professional health care services, including the giving of medical advice, and no doctor/patient relationship is formed. The use of information on this blog or materials linked from this blog is at the user's own risk. The content of this blog is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard, or delay in obtaining, medical advice for any medical condition they may have, and should seek the assistance of their health care professionals for any such conditions.
  1. Professor Ioannidis took a lot of heat, and still does to this day, for his views on the handling of the pandemic. Before that he was known as the science study slayer because he called out lots of bad science. He has very high standards and is as honest as they come … so of course he has to be shut down immediately and categorized as a rogue nutcase. You’re not allowed to be honest when it comes to Corona.

    • Thank you for finally interviewing John, Peter! His work was foundational to my budding skepticism in epidemiological research, particularly in nutrition, and he has not done many podcasts, so this episode is really a treasure!

      I followed the work and criticisms of Ioannidis’ publications on COVID closely. I have tremendous respect for him and I think he was both correct (looking at the data) and wrong (public policy implications). I should note that I am not an epidemiologist and I think his appeal to the greater authority (i.e. epidemiologists are the ones we should be listening to) is right in general, but these folks are not infallible. I am a founding member at endcoronavirus.org – bias disclosed.

      I think Harry Crane summarized the opposing position well in his paper “A fiasco in the making: More data is not the answer to the coronavirus pandemic” https://researchers.one/articles/20.03.00004

      One of Ioannidis’ most vocal critics (quasi-libertarian leaning, though I believe the foundation is scientific-apolitical) was Nassim Taleb, who basically argues in favor of risk mitigation/survival when uncertainty is large. There is a fantastic debate between the two camps on forecasters.org


      I think this quotation gets at it: “For matters of survival, particularly when systemic, and in the presence of multiplicative processes (like a pandemic), we require ‘evidence of no harm’ rather than ‘evidence of harm.'” Ioannidis was not alone in his position, but I think an important reason he was singled out is precisely because he is such an influential figure: people in public policy reference his material and treat it like gold.

      Of note, on page 32 of his final response in the forecasters.org debate Ioannidis admirably admits he was wrong with respect to his April 9 quote on CNN: “If I were to make an informed estimate based on the limited testing data we have, I would say that COVID-19 will result in fewer than 40,000 deaths this season in the USA.” I am glad John survived the onslaught of negative opinion so that he can continue to do research, and I am very optimistic about his ability to turn past mistakes into progress and beneficial public communications.

  2. Professor Ioannidis has committed the same bias he so strenuously sought to prevent in other researchers. He is honest, but he could be more honest by apologizing for allowing his scientific approach to data to be used by those who minimized the effect of coronavirus on the US population. My mother is one of the 350k so far who have died, despite staying at home 100% of the time. My father infected her after going to church (3 times during the entire epidemic). Ioannidis appropriately took heat for his response and, in my mind, has not been implicated enough in the effort to minimize the impact this virus has had on US culture and on our economic and psychological future.

  3. Peter, I really think that Ioannidis is not correct in his self-reflection on the Santa Clara study. That 50X number is a *point estimate* with extremely *wide* uncertainty – when the data are analyzed correctly. So wide they were in fact consistent with *zero* prevalence! Don’t take my word for it. I’m just a blog commenter from a different scientific field. Instead, here’s arch-statistician Andrew Gelman:

    First the blog where he analyzed the study preliminarily, and many of us discussed –

    Now the peer-reviewed paper forthcoming in Royal Statistical Society:

    In short, the Bendavid study was a statistical train wreck, and it was irresponsible for John or anyone else to go around to policy-makers and media hyping those results!

    There is no doubt JP Ioannidis is a great scientist and methodologist. I’ll be happy to have 1/100 the impact he has had! But he whiffed this one in some important ways. These are not mutually exclusive statements about the world!

  4. Loved it, especially how it wrapped up. How do we cultivate a new social norm of it being not only okay, but celebrated when we see errors in our thinking and revise what we think we know accordingly?

    Examination of Diamond Princess cruise ship data early on was what guided me towards skepticism of the 5-10% IFR claims. Of course, I’m biased towards skepticism in general.
