3 Comments
Sep 5, 2023 · Liked by Chris Worsham

Thanks for your response, Chris. I think we are perfectly aligned. The problem is that many will view different research methods as "good" vs. "bad". RCTs have (appropriately) held an esteemed place in the hierarchy of evidence, but I think this has also led to an underappreciation of their limitations and to misunderstandings about how observational approaches (including natural or quasi-experimental methods) can valuably contribute to increasing our knowledge base.

Sep 5, 2023 · Liked by Chris Worsham

I read your NY Times op-ed piece with interest. Although I generally agree with your point about the value of natural experiments for potentially increasing the internal validity of population-based research studies, your piece gives the impression that observational research methods are uniformly problematic. I think it is important to highlight that the validity of causal inferences from different study designs falls on a spectrum. Even randomized controlled trials are subject to selection, information, and confounding biases, especially over follow-up. There are numerous examples where observational studies have provided important and valid insights into intervention/exposure/treatment effects, especially when methodologically framed as a target trial emulation (e.g., COVID-19 vaccine studies). I would also emphasize that the use of natural experiments in populations is a form of observational study, assuming the latter refers to situations where the intervention/exposure/treatment was not manipulated for the purpose of answering a research question. These issues speak to the importance of understanding the principles that underlie making causal inferences from data, and of appropriately deploying the tools at our disposal to make those inferences as valid as possible.

author

Thanks for the comments, Joe--I agree with everything you said! I certainly don't think observational research methods are uniformly problematic (and as you said, the use of natural experiments is observational research, as is most of the research we do, and there are a host of other methods that can help reduce bias that go beyond the scope of that article). For example, the body of observational studies makes it clear that smoking causes cancer, lung disease, heart disease, etc., yet no randomized trial has been done to show this. We know it's not good to drink 10 alcoholic drinks per day or to get our hydration from soda thanks to research that is limited but useful enough to suggest causal relationships, even if the estimates of the effects on various outcomes aren't precise. And in the absence of higher-quality research, we have to base our decisions on something.

But I do think that the vast majority of published studies looking at less extreme questions--such as whether 0 drinks per day is better than 1, or whether substituting artificial sweeteners for some dietary sugars is helpful or harmful--have substantial limitations that make them, for the most part, inactionable. As you said, deploying the best tools where we can will hopefully lead to better research that lets us make more informed decisions at the grocery store and the dinner table (and on that midnight trip to the fridge).
