Even more Random Acts of Medicine
We share some more of our writing published in other outlets.
This week, we’ll be taking a short summer break from our usual posts—we’ll be back next week with more Random Acts of Medicine content. But if you need some tiding over, below are some links to our writing in other outlets.
“The Science of What We Eat Is Failing Us”
In this guest essay at The New York Times, we explore why most of the research into what we eat leaves us hungry for better studies that can do more to inform the decisions we make at the dinner table.
“The ‘successful failures’ of Apollo 13 and Covid-19 vaccination”
In this essay for Stat, published on the anniversary of the Apollo 13 mission, we examine how the phrase “successful failure”—coined by astronaut Jim Lovell to describe his aborted mission to the moon—can be applied to public health during the COVID-19 pandemic.
“Recent Improvements in Data From the Gun Violence Archive—Will They Lead to Change?”
In this invited commentary, published in a recent issue of JAMA Network Open in response to a study of data in the Gun Violence Archive, we discuss ways in which we believe better data on gun violence might—and might not—lead to meaningful public health benefits. (Forgive the jargon in the second paragraph; the gist is that data in the Gun Violence Archive are pretty good compared to the alternatives.)
Random Acts of Medicine
If you’ve been enjoying our Substack newsletter, then you’ll certainly enjoy our book, on sale here or wherever you buy your books!
Thanks for your response, Chris. I think we are perfectly aligned. The problem is that many will view different research methods as "good" vs. "bad". RCTs have (appropriately) held an esteemed place in the hierarchy of evidence, but I think this has also led to an underappreciation of their limitations and misunderstandings about how observational approaches (including natural or quasi-experimental methods) can valuably contribute to increasing our knowledge base.
I read your NY Times Op-Ed piece with interest. Although I generally agree with your point on the value of natural experiments to potentially increase the internal validity of population-based research studies, your piece gives the impression that observational research methods are uniformly problematic. I think it is important to highlight that the validity of causal inferences from different study designs is on a spectrum. Even randomized controlled trials are subject to selection, information, and confounding biases, especially over follow-up. There are numerous examples where observational studies have provided important and valid insights on intervention/exposure/treatment effects, especially when methodologically framed as a target trial emulation (e.g., COVID-19 vaccine studies). I would also emphasize that the use of natural experiments in populations is a form of observational study, assuming the latter refers to situations where the intervention/exposure/treatment was not manipulated for the purpose of answering a research question. These issues speak to the importance of understanding the principles that underlie making causal inferences from data and appropriately deploying the tools at our disposal to make these inferences as valid as possible.