Junk Science in the Courtroom Keeps Coming Back – and Getting Swatted

There’s something about autism that invites scapegoating. The latest attack targeted the makers of Lexapro, the anti-depressant medication, over its use during pregnancy. Six plaintiffs recruited three experts to testify to a supposed causal connection between the drug and their children’s affliction. The court rejected the expert testimony outright and dismissed the case. Three weeks ago, the Second Circuit affirmed. The decisions, while laudable, are problematic.

One wonders why plaintiffs keep going through this same rigmarole: proffering shady science that, of late, has been repeatedly rejected, costing money, driving up drug prices, and jeopardizing the availability of an important drug that thousands of others genuinely need. The answer: the judicial opinions don’t evidence a clear enough understanding of the scientific method, and that invites push-back from an aggressive bar.

Judges as Gatekeepers

Reminiscent of the Zantac cases, where the federal judge rejected all the plaintiffs’ experts’ testimony, the federal judge in Daniels-Feasel v. Forest Pharm excluded the plaintiffs’ experts from testifying. While not as scientifically sophisticated as the Zantac opinion, the Daniels-Feasel decision rested on the same legal standards set forth in the Daubert case and Rule 702 of the Federal Rules of Evidence. And while the Second Circuit affirmance is of limited precedential value, it sends a message – at least to federal judges and those 43 states which have adopted Daubert – that judges are charged as gatekeepers: they are required to bar evidence from even entering the courtroom if it is of questionable reliability and relevance.

The Daniels-Feasel experts were confounded by conflicting epidemiological evidence. As in the Zantac case, they cherry-picked the favorable results, ignoring or side-stepping the others. The courts made no bones about it: cherry-picked results are the kiss of death. And while the experts had some half-baked methodology to support their approach, it wasn’t applied consistently enough to satisfy the court’s insistence that not only must the data be reliable, but so must the methodology on which the expert opinion rests. The same flaws found in the Zantac case were repeated in Daniels-Feasel.

Plaintiffs’ attorneys are not stupid. And since they only get paid if they win, we can anticipate that they will “perfect” their method and address the federal courts’ objections. Either that or, where possible, they will bring their cases in those state courts where the evidentiary rules are more favorable, exactly as they did in the Zantac situation.

What can be done?

Firstly, a better understanding of the scientific method (not necessarily science per se) must be taught to our judiciary. The current courses, of which many exist, aren’t working. Friends of the court, who provide legal assistance in the form of amicus briefs, might also benefit from such knowledge and could underwrite such courses. Pandering to cost-benefit analysis, or rote repetition of concepts now outdated in the assessment of sound science, such as “falsifiability,” helps no one, however well meant.

For example, the Daubert case pivots on the “reliability” of the data and the scientific method without any clear understanding of what “reliable” means in science. It means “repeatable,” and it properly applies to data. Assurance that results are repeatable comes from statistical measures, such as p-values and confidence intervals. Relaxing those standards, as the California state court did in its Zantac case, impugns the scientific soundness of the proffered evidence and makes a mockery of the requirement to introduce only competent evidence.
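To make the statistical point concrete, here is a minimal sketch, in Python and with hypothetical counts (not drawn from any SSRI study), of how a 95% confidence interval is attached to a relative risk from a simple cohort table:

```python
import math

# Hypothetical counts, for illustration only - not from any SSRI/ASD study.
exposed_cases, exposed_total = 30, 1000      # outcomes among exposed mothers
unexposed_cases, unexposed_total = 20, 1000  # outcomes among unexposed mothers

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
rr = risk_exposed / risk_unexposed  # relative risk point estimate

# Standard error of log(RR), via the usual Wald approximation
se_log_rr = math.sqrt(
    1 / exposed_cases - 1 / exposed_total
    + 1 / unexposed_cases - 1 / unexposed_total
)
z = 1.96  # two-sided 95% confidence
lower = math.exp(math.log(rr) - z * se_log_rr)
upper = math.exp(math.log(rr) + z * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
# With these illustrative counts: RR = 1.50, 95% CI = (0.86, 2.62)
```

With these made-up counts, the point estimate looks elevated (a relative risk of 1.5), yet the confidence interval comfortably spans 1.0: the “effect” could easily be noise. That is exactly the kind of result a cherry-picking expert might still tout.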

Secondly, reliability does not pertain to methodology, although this mischaracterization is often used to reject shoddy evidence. To assess the soundness of a method (which indeed is required per the Joiner case), we need to look at its validity, a term rarely used properly in legal mantra or lore. We need to ask whether the method or study design was developed to prove or disprove the articulated hypothesis. Indeed, is the hypothesis underpinning the causal claim clearly articulated in the studies relied on?

Here it was not. The claim was that SSRIs, the class of pharmacologic drugs to which Lexapro belongs, caused all the various cases of autistic spectrum disorder (ASD) suffered by the claimants’ children; yet that was not the hypothesis articulated in the studies the experts relied on. The claim may not even be testable: the ASD case definition is too loose to be measured adequately or reliably for purposes of testing causation, and the number of children in each category may be too small to yield meaningful data. One meta-analysis concluded:

“The findings of this meta-analysis and narrative review support an increased risk of ASD in children of mothers exposed to SSRIs during pregnancy; however, the causality remains to be confirmed.”

Indeed, the research that did note an association between SSRIs and ASD consisted of observational studies, useful for identifying correlations for the purpose of assessing medical care but not designed to determine causation. These studies explicitly noted limitations cautioning against such leaps of logic.

Thirdly, the idiosyncratic weighting of epidemiological studies by some gestalt matrix of some or all of the nine “Bradford Hill” criteria [1], whether consistently applied or not, does not establish any causal hypothesis, no matter what courts or experts claim. That the Daniels-Feasel experts invoked only some of the Bradford Hill tests, a point the court seized on in rejecting their testimony, is not in and of itself fatal.

However, a key concern is excluding confounders and other possible causes. Careful histories of pregnant women prescribed anti-depressants are crucial before an epidemiological study can be considered unbiased. Indeed, recent evidence indicates that peri-pregnancy use of cannabinoids may also be implicated in autism, along with genetic factors and depression itself.
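Confounding is not an abstract worry; it can manufacture an association out of thin air. Below is a minimal sketch in Python, using purely synthetic and illustrative probabilities (no real SSRI or ASD data), in which the underlying depression drives both the prescription and, by assumption, the outcome, while the drug itself does nothing:

```python
import random

random.seed(1)
n = 100_000
exposed = exposed_cases = unexposed = unexposed_cases = 0

for _ in range(n):
    depressed = random.random() < 0.15  # the confounder
    # Exposure (SSRI use) is driven by the confounder, not vice versa
    ssri = random.random() < (0.40 if depressed else 0.02)
    # The outcome is driven ONLY by the confounder; SSRI has zero effect
    outcome = random.random() < (0.030 if depressed else 0.010)

    if ssri:
        exposed += 1
        exposed_cases += outcome
    else:
        unexposed += 1
        unexposed_cases += outcome

rr = (exposed_cases / exposed) / (unexposed_cases / unexposed)
print(f"Apparent relative risk with zero causal effect: {rr:.2f}")
```

Despite a causal effect of exactly zero, the simulated “study” reports an apparent relative risk of roughly two, which is precisely why an observational association, standing alone, cannot carry a causal claim.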

Finally, valid epidemiological studies generating statistically reliable results might provide a sound indication of causality, assuming the relevant Bradford Hill factors are met. I suggest that only studies where the lower end of the confidence interval sits above 1.5, evidencing a more than 50% increased risk of disease, reach the level of certainty equivalent to the “more probable than not” standard required in civil law. To date, no legal doctrine proposes this as the applicable standard. Should it be adopted, however, many of the loosey-goosey experts and sham cases would surely evaporate.
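Were such a threshold ever adopted, applying it would be mechanical. Here is a minimal sketch in Python of the proposed screen; the study labels and numbers are hypothetical, for illustration only:

```python
THRESHOLD = 1.5  # proposed floor for the lower 95% CI bound on relative risk

# (study label, RR point estimate, 95% CI lower bound, 95% CI upper bound)
# All entries are hypothetical illustrations, not real studies.
hypothetical_studies = [
    ("Cohort A", 1.4, 0.9, 2.2),  # CI crosses 1.0: no reliable signal at all
    ("Cohort B", 1.9, 1.2, 3.0),  # elevated, but lower bound below 1.5
    ("Cohort C", 2.4, 1.6, 3.6),  # lower bound above 1.5: would qualify
]

for name, rr, lo, hi in hypothetical_studies:
    verdict = "meets proposed standard" if lo > THRESHOLD else "excluded"
    print(f"{name}: RR={rr:.1f} (95% CI {lo:.1f}-{hi:.1f}) -> {verdict}")
```

Note how strict the screen is: even a study with an elevated point estimate of 1.9 fails, because its data are too noisy to establish, with the required confidence, that the true risk increase exceeds 50%.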

Bad results invite bad law – and bad social policy.

Attention to the fact that bad results invite a scapegoat, with autism perhaps crying the loudest and provoking the most significant anti-social behaviors (e.g., anti-vaxxism), might sensitize the public (and the courts) to look warily on these cases.

My friend, a medical doctor, has a lovely adult son: good-looking, sociable, happy, friendly, outgoing. We’ve had many a delightful conversation initiated by the young man. My friend believes her son is autistic. Hardly. He is developmentally disabled. But autism is a much less socially stigmatized diagnosis. From there she decided, ten years after her son’s birth, that his “autism” was caused by his measles shot. She still believes it, and it fuels her passionate anti-vaxxism. It’s not a far cry to believe that anti-depressants ingested during pregnancy might be another cause.


[1] Sir Austin Bradford Hill crafted nine parameters by which epidemiological studies are interpreted: strength, consistency, specificity, temporality, biological gradient, plausibility, coherence, experiment, and analogy. By training, Bradford Hill was an economist, not an epidemiologist or statistician.
