A new article published online in JAMA Internal Medicine, entitled "Association of Opioid Prescriptions From Dental Clinicians for US Adolescents and Young Adults With Subsequent Opioid Use and Abuse," claims that there is a clear association between dental opioid prescriptions and subsequent opioid use and abuse in adolescents and young adults who had their wisdom teeth removed.
On the surface, the message seems clear enough, maybe even intuitively obvious - that people who get opioid prescriptions following oral surgery are more likely to run into problems than those who do not. Even though the study is a retrospective epidemiological analysis - right near the bottom of the barrel in the hierarchy of study designs - the numbers do seem convincing. But the problem with retrospective analyses is that the numbers depend heavily on which data are included in the study and which are omitted, and those choices are controlled by the researchers. So it is reasonable to ask whether the numbers are real.
When it comes to numbers and studies, you would have to search long and hard to come up with a better expert than Stan Young, Ph.D., a biostatistician and ACSH advisor. Dr. Young had to read only the abstract of the JAMA paper to come up with a long list of comments and questions that cast doubt on whether what we are seeing is real or whether this is just another data-dredging epidemiological study, most of which belong in the bottom of a birdcage. Here are Dr. Young's thoughts.
Objective To examine the association between index dental opioid prescriptions from dental clinicians for opioid-naive adolescents and young adults in 2015 and new persistent use and subsequent diagnoses of abuse in this population.
Young: Why just 2015? Did they look at other years?
Design, Setting, and Participants This retrospective cohort study examined outpatient opioid prescriptions for patients aged 16 to 25 years in the Optum Research Database in 2015. Prescriptions were linked by National Provider Identifier number to a clinician category.
Young: The authors call this a cohort study, but it is not. It is really a case-control study. Cohort studies are stronger than case-control studies; in a cohort study, you are boxed in a bit and cannot pick and choose your groups. When you do a case-control study, the cases are fixed - they have the condition. But the controls can be selected in various ways - by age and sex, the income of the parents, peer pressure, and a host of other factors - and the selection of the comparator group can change the answer.
Exposures Individuals were included in the index dental opioid (opioid-exposed) cohort if they filled an opioid prescription from a dental clinician in 2015, had continuous health plan coverage and no record of opioid prescriptions for 12 months before receiving the prescription, and had 12 months of health plan coverage after receiving the prescription. Two age- and sex-matched opioid-nonexposed control individuals were selected for each opioid-exposed individual and were assigned a corresponding phantom prescription date.
Young: “[H]ad continuous health plan coverage” introduces another variable into the selection process, which can be another way to cherry-pick the study population, and that influences the results. In effect, the question becomes: are opioid prescriptions from dental clinicians, written for pain management of third molar extractions in adolescents and young adults, associated with subsequent opioid use and abuse?
And why were adolescents and young adults included but not all patients? Did the authors scan for subgroups until they found one that gave a small p-value? If so, the entire study is probably meaningless.
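To see why subgroup scanning matters, here is a minimal sketch (in Python, with entirely made-up numbers - it uses random noise, not the Optum data): slice a dataset in which nothing is going on into enough subgroups, test each one, and the smallest p-value will often drop below 0.05 by chance alone.

```python
# Illustrative sketch only: scan 20 subgroups of pure noise (no true effect
# anywhere) and keep the smallest p-value. "Significance" shows up anyway.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subgroups = 20      # e.g., age bands, sex, plan type, region, procedure...
n_per_arm = 500       # patients per arm within each subgroup

p_values = []
for _ in range(n_subgroups):
    exposed = rng.normal(size=n_per_arm)   # both arms drawn from the same
    control = rng.normal(size=n_per_arm)   # distribution: the null is true
    p_values.append(stats.ttest_ind(exposed, control).pvalue)

print(f"Smallest p-value across {n_subgroups} null subgroups: {min(p_values):.3f}")
# With 20 independent looks, the chance that at least one p < 0.05 is about 64%.
```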
Findings In this cohort analysis of claims data, index opioid prescriptions in opioid-naive adolescents and young adults compared with age- and sex-matched controls were associated with a statistically significant 6.8% absolute risk increase in persistent opioid use and a 5.4% increase in the subsequent diagnosis of opioid abuse.
Young: Picking a comparison group can be a problem. Age and sex may not provide a similar comparator group. Perhaps a better comparator would be income and education.
Meaning The findings suggest that dental opioid prescriptions, which may be driven by third molar extractions in this age group, may be associated with subsequent opioid use and opioid abuse.
Young: The narrow patient entry/selection - third molar extractions - is puzzling. Did they look at lots of dental procedures where pain meds were prescribed and cherry-pick this one? Cherry-picking is the hallmark of data-dredging studies. And finding two "may be"s in a single findings statement always raises a red flag.
Also, one must ask whether they generated their hypothesis (opioid use and abuse are coupled with third molar extractions) in advance. In other words, did they file a protocol before initiating the study or simply look through multiple sets of variables until they found two that matched up? The difference here is day and night.
The group used only one database to generate its conclusions. Did they test this claim in another, independent database? If not, why not?
It is important to note that when analyzing large databases, it is possible to test lots of things and then make up a story that accounts for a small p-value in order to get published. This is referred to as p-hacking followed by HARKing (hypothesizing after the results are known). If that is what was done, you can pretty much throw out the entire conclusion.
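The arithmetic behind that concern is simple, and a rough sketch follows (illustrative only; the test counts are arbitrary and not taken from the paper). If many independent hypotheses are each tested at the usual 0.05 level and none of them is real, the chance of at least one spurious "significant" result grows quickly.

```python
# Chance of at least one false positive when k independent tests are run at
# alpha = 0.05 and no effect is real: 1 - (1 - alpha)**k. Illustrative only.
alpha = 0.05
for k in (1, 3, 10, 20, 60):
    print(f"{k:3d} tests -> P(at least one p < 0.05) = {1 - (1 - alpha) ** k:.2f}")
# Output: 1 -> 0.05, 3 -> 0.14, 10 -> 0.40, 20 -> 0.64, 60 -> 0.95
```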
Main Outcomes and Measures Receipt of an opioid prescription within 90 to 365 days, a health care encounter diagnosis associated with opioid abuse within 365 days, and all-cause mortality within 365 days of the index opioid or phantom prescription date.
Young: This study also uses a technique called "the combining of endpoints." To illustrate what this means, let's look at a different but similar situation. Say I am looking at deaths in California due to air pollution. I can look at all-cause, respiratory, or cardiovascular deaths. So I have three chances to grab the brass ring. All of this raises the question: did they have a protocol? Was the protocol written BEFORE they looked at the data?
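For the three endpoints listed above, the same back-of-envelope arithmetic gives roughly a 14% chance of at least one spuriously "significant" outcome even if dental opioid prescriptions had no effect at all. A protocol written before the data were examined would typically name a single primary endpoint or tighten the per-endpoint threshold - for example with a Bonferroni adjustment, a standard fix sketched here (nothing in the abstract tells us whether the authors did anything of the sort).

```python
# Three co-equal endpoints, each tested at alpha = 0.05 (illustrative numbers).
alpha, endpoints = 0.05, 3

# Family-wise chance of a spurious hit if no adjustment is made: ~0.14
print(f"P(at least one false positive) = {1 - (1 - alpha) ** endpoints:.2f}")

# Bonferroni: test each endpoint against alpha / k instead, here ~0.017
print(f"Bonferroni per-endpoint threshold = {alpha / endpoints:.3f}")
```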
Results Among 754 002 individuals with continuous enrollment in 2015, 97 462 patients (12.9%) received 1 or more opioid prescriptions, of whom 29 791 (30.6%) ...
Young: This gives you the opportunity to pick your subgroup. Computers are fast; you can try this or that, and you can manipulate the input data set as well as the analysis. Continuous enrollment vs. any-time enrollment - lots of choices need to be made. Each choice sounds reasonable, but together they create an opportunity to game the system.
Conclusions and Relevance The findings suggest that a substantial proportion of adolescents and young adults are exposed to opioids through dental clinicians. Use of these prescriptions may be associated with an increased risk of subsequent opioid use and abuse.
Young: Did they try this and that in the analysis? Did they try various data configurations? Did they explore in one year, 2015, and then do a hard test in another year? NO. Did they devise their stated analysis plan AFTER looking at the data? There are too many questions, based on the abstract alone, to give this study much credence.