Evidence-Based Hearsay: Clinical Medicine's Fake News

John Ioannidis, a professor at Stanford, has made something of a career writing about the quality of scientific reporting. His paper "Why Most Published Research Findings Are False" is among the most downloaded articles from PLoS Medicine. He has written a new essay in this month’s Social Science & Medicine. He begins by reviewing a study in the same issue demonstrating that unsolicited online reviews of medications were biased. Biased in that the effectiveness of the three treatments studied, Benecol, CholestOff, and Orlistat,

“seemed more impressive in the online reports….The difference between reported effectiveness and clinically-proven effectiveness was substantial for all three treatments, and it would change their perceptions from medications of no or small benefit of dubious clinical value to ones with clear clinical indications.”

And biased because “people with good treatment outcomes are more inclined to write online medical product reviews.”

But having said that, Ioannidis next turns toward his real quarry: the use of epidemiology (observational methods) in making causal claims. Randomized clinical trials (RCTs) are expensive and time-consuming, while cohort studies (one of the two most popular observational methods), which follow a population over time and relate environmental factors to disease, are more convenient. This convenience, the hope that various statistical means will ‘reduce bias,’ and a profound Darwinian pressure to publish all favor a strategy where observation replaces RCTs.
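
To make that contrast concrete, here is a minimal sketch in plain Python. Everything in it is invented for illustration: a drug with zero true effect, a single confounder (health-conscious habits) that raises both the chance of taking the drug and the chance of a good outcome, and made-up probabilities. The point is only to show why self-selection can manufacture an apparent benefit that randomization dissolves.

```python
import random

random.seed(0)
N = 100_000  # simulated patients

def simulate(randomized):
    """Return outcome lists for treated and untreated patients under a
    drug with ZERO true effect. 'Healthy habits' is a confounder that
    raises both the chance of taking the drug and the chance of a good
    outcome, unless treatment assignment is randomized."""
    treated, untreated = [], []
    for _ in range(N):
        healthy_habits = random.random() < 0.5             # the confounder
        if randomized:
            takes_drug = random.random() < 0.5             # RCT: coin flip
        else:                                              # self-selection
            takes_drug = random.random() < (0.8 if healthy_habits else 0.2)
        # Outcome depends ONLY on the confounder, never on the drug
        good_outcome = random.random() < (0.7 if healthy_habits else 0.3)
        (treated if takes_drug else untreated).append(good_outcome)
    return treated, untreated

for label, randomized in [("Observational", False), ("Randomized", True)]:
    t, u = simulate(randomized)
    diff = sum(t) / len(t) - sum(u) / len(u)
    print(f"{label}: apparent drug effect = {diff:+.3f}")
```

Under these invented numbers, self-selection makes the useless drug appear to improve outcomes by roughly 24 percentage points, while randomization drives the apparent effect to zero. Statistical adjustment can remove such bias only for confounders someone thought to measure, which is precisely the worry about replacing RCTs with observation.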

There are good reasons to believe that, when properly done, this approach can produce quality work. Electronic records and a wealth of datasets from government and other sources are valuable in looking for observational effects.

But in my experience using many large datasets, more time is frequently spent cleaning up and removing unreliable data than on the analysis itself. Furthermore, multiple studies have shown how electronic medical records, designed for billing documentation, are plagued with errors as providers simply copy and paste from one visit to the next.
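
As an illustration of where that cleanup time goes, here is a hypothetical sketch. The field names and records are invented, not any real EMR schema; it simply flags notes copied verbatim from a patient’s previous encounter and drops physiologically impossible values before any analysis runs.

```python
from datetime import date

# Hypothetical visit records; field names are invented for illustration
visits = [
    {"patient": "A", "date": date(2020, 1, 5), "weight_kg": 82.0,
     "note": "Stable angina, continue statin."},
    {"patient": "A", "date": date(2020, 4, 2), "weight_kg": 82.0,
     "note": "Stable angina, continue statin."},   # copy-pasted note
    {"patient": "B", "date": date(2020, 2, 9), "weight_kg": -5.0,
     "note": "Post-op check, wound healing well."},  # impossible weight
]

def clean(visits):
    """Drop implausible values and flag notes that are verbatim copies
    of the same patient's previous note."""
    cleaned, last_note = [], {}
    for v in sorted(visits, key=lambda v: (v["patient"], v["date"])):
        if not (20 <= v["weight_kg"] <= 400):      # implausible weight
            continue
        v = dict(v, copied_note=(v["note"] == last_note.get(v["patient"])))
        last_note[v["patient"]] = v["note"]
        cleaned.append(v)
    return cleaned

for v in clean(visits):
    print(v["patient"], v["date"],
          "copied note" if v["copied_note"] else "ok")
```

Even this toy pass needs judgment calls (what counts as an implausible weight, whether a repeated note is laziness or a genuinely stable patient), and each one is a place for error to creep into the downstream analysis.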

For Ioannidis, the embrace of these methods by all stakeholders - academia, private-sector scientists, government, and professional societies - may result in “a massive suicide of [clinical medicine's] scientific evidence basis.”

I have lived in both worlds. The early years of my career were spent as an academic vascular surgeon, but the bulk of my professional life was as a community physician. I dutifully read journals every month, but I was also educated by peers and, dare I say, sometimes by a sales representative for a new medication or surgical device. Ioannidis’s concerns lie further upstream, where publication and approval decisions are made. They are more than valid, and they may represent the canary’s silence in medicine's "mine of clinical information."

Guess what? Downstream, outside academia where many of us work, hearsay has informed evidence for a long time.

Consider the seminal work of Everett Rogers, who pioneered the study of information sharing and dissemination with his 1962 book Diffusion of Innovations. Information cascades downstream by social means, from one individual to another. Medical journals are the earliest, initiating trickle of information that changes our beliefs, and they have an important role. But the diffusion of information, well, that is a social process. I have always been uncomfortable with the term ‘evidence-based’ because, after all, the evidence keeps changing. I prefer evidence-informed because it accounts for all those social processes that lead me to consider a change.
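
A toy version of that social process, an independent-cascade sketch on a made-up contact network rather than Rogers’s own model, shows how a finding seeded at a journal can spread, or stall, depending entirely on who talks to whom:

```python
import random

random.seed(1)

# Hypothetical contact network: who talks to whom about new findings
network = {
    "journal":    ["academic1", "academic2"],
    "academic1":  ["community1", "community2"],
    "academic2":  ["community2", "community3"],
    "community1": ["community4"],
    "community2": ["community4", "community5"],
    "community3": [],
    "community4": ["community5"],
    "community5": [],
}

def cascade(network, seeds, p=0.5):
    """Independent-cascade model: each newly informed person gets one
    chance, with probability p, to pass the finding to each contact."""
    informed, frontier = set(seeds), list(seeds)
    while frontier:
        person = frontier.pop()
        for contact in network.get(person, []):
            if contact not in informed and random.random() < p:
                informed.add(contact)
                frontier.append(contact)
    return informed

reach = [len(cascade(network, ["journal"])) for _ in range(1000)]
print(f"average reach: {sum(reach) / len(reach):.1f} of {len(network)} people")
```

The journal article is just the seed; how far it travels is a property of the network and the per-conversation transmission probability, not of the evidence itself.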

Ioannidis ends with this:

“Understanding the science of hearsay may be useful to help us salvage evidence-based medicine.”

He is right. I suspect that much of that understanding already exists; it has simply cascaded through different social networks than the ones inhabited by physicians and physician-scientists.