When A.I. Fails Patients, Who's Accountable? A Healthcare Primer.

By Chuck Dinerstein, MD, MBA — Mar 27, 2024
As developers and health systems embrace artificial intelligence-powered software, a pressing question emerges: Who bears the burden when these innovations inadvertently harm patients, especially when legal precedent offers only faint guidance? Let's take a look.
Original Image by Gerd Altmann from Pixabay

“When AI contributes to patient injury, who will be held responsible?”

Great question. In the headlong rush by developers and health systems to embrace software systems using AI, little is known about how product liability and medical malpractice case law will be interpreted. A review in the New England Journal of Medicine (NEJM) lays out some legal issues and the “somewhat faint” signals emerging from case law.

A Legal Primer

There are three possible defendants in these cases: the product developer, the physician user, and, with increasing frequency, the physician’s employer or the health system providing the AI healthware. [1] While each of these actors has a different “standard of care” before the law, in litigation plaintiffs must show that the defendants owed them a duty, breached the applicable standard of care, and that the breach caused their injury.

Generally, FDA-approved medical devices are protected by the doctrine of “preemption,” which shields them from many personal injury claims. In the case of AI-augmented devices, it is unclear whether preemption applies to the device in its totality or separately to its hardware and software components.

Because software, and especially the output of AI programs, is intangible, “courts have hesitated in applying product liability doctrines.” In many instances, plaintiffs “must demonstrate the existence of a reasonable alternative design for safety.” Given the underlying nature of AI programs, which are statistical and essentially “black box,” it is difficult to demonstrate that “incorrect output resulted from the alleged defect.”

Because many of these tools are quite properly overseen by humans, plaintiffs must demonstrate that the physician's use of, or departure from, a recommendation or computer-driven action was unreasonable. When the tools are integrated into hospital care or electronic medical records, another issue arises: whether the integration has facilitated or hindered physician judgment.

A Tort Review

The authors went on to review tort cases involving software, including AI applications, in and out of healthcare. In reviewing 51 cases with a “human in the loop,” they identified three clusters of liability concerns.

Patients harmed by software defects in care management systems may bring product liability claims against developers and sue hospitals for their involvement in system selection, maintenance, or updates. “In Lowe v. Cerner, the court held that the plaintiffs had made a viable claim that a defective user interface in drug-management software led physicians to mistakenly believe that they had scheduled medication.”

Physicians relying on erroneous software recommendations may face malpractice claims from patients who argue that the physician should have independently verified or disregarded the recommendation. The authors argue

“that the varying performance of AI for different patient groups will force courts to grapple with determining when a physician reasonably should have known that the output was not reliable for particular patients.”

AI systems are “brittle” and often fail to perform as well on real-world data as they do on their training sets. In addition to knowing these population-based distinctions, physicians would have to understand the model's underlying algorithm and its sensitivity and specificity, which in many cases are unknown and held as proprietary by the software provider. Courts, like physicians, will grapple with determining when physicians should have recognized the unreliability of software outputs. The current standard of “how a reasonable physician would proceed” may apply. As I have mentioned in the past, when a human is given oversight, it is rarely, if ever, the software that is found at fault.

Malfunctions of software within medical devices prompt claims against physicians, hospitals, and developers for negligent use, installation, or maintenance. Leaving aside a preemption defense, the authors noted that

“plaintiffs struggle to sustain claims when diminished visibility into the workings of the device makes identifying a specific design defect difficult. The complexity and opacity of AI lead to similar issues.”

Finally, as my colleague Dr. Billauer has written on many occasions regarding evidentiary science, the courts are frequently ill-equipped to understand and navigate scientific thinking. Courts are not distinguishing AI from more traditional software, making assumptions about transparency and explainability that AI systems do not share with conventional algorithms.

What to do

“AI is not one technology but a heterogeneous group with varying liability risks.”

Without regulation [2], policy is enacted through litigation in the courts. The authors offered four risk factors to consider in AI healthware, including identifying reasonable expectations about the likelihood and nature of errors based on the model, its training data, and task design. Because we are talking about systems with human oversight, it is worthwhile to consider “the opportunity for catching errors,” often referred to in the medical literature as “rescue.” Factors that influence catching these errors include “how much buffer or time exists between failure of the tool and harm to the patient” and, more importantly, situational awareness: how vigilant a physician can be in assessing a decision under time and informational constraints. This has been an ongoing source of medical error, as the chance for situational awareness may diminish with every shift change.

Individuals using decision support models often exhibit automation bias: the tendency to rely too heavily on automated systems or computer-generated output, often at the expense of one's own judgment or critical thinking.

“Can busy physicians be counted on to thoughtfully edit large language model–generated draft replies to patients’ emails, investigate whether model-recommended drugs are indeed appropriate for a given patient, or catch errors in visit notes produced by speech-to-text models?”

Anyone who has seen how physicians cut and paste prior histories and physical examinations into the current note knows the answer is NO! Relying on physicians who may be overworked or tired may not “provide meaningful catch opportunities,” but it will be a source of litigation. As the authors note, harm invites lawsuits, with “patients with serious injuries being more likely to seek legal representation” and, parenthetically, with lawyers more likely to take on those more remunerative cases.

Healthcare systems that embrace AI tools with significant risk, with “low opportunity to catch the error, high potential for patient harm, and unrealistic assumptions about clinician behavior,” will need to provide more comprehensive monitoring. Where will the funding for this monitoring come from, and will administrators, in their haste to acquire the latest shiny object, measure those costs against the “savings”?

These systems will also face new evidentiary problems. In addition to maintaining and updating AI systems, the owners will have to document the model's inputs, outputs, and version, along with their clinicians' reasoning in accepting or rejecting the AI's recommendations, in order to have any defense in court. How will they maintain copies of legacy software? Who amongst us still has a copy of iOS 11, the iPhone operating system released in September 2017? We are currently on iOS 17.
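To make that documentation burden concrete, here is a minimal sketch of what one such audit record might contain; the structure, field names, and example values are my own assumptions for illustration, not anything specified by the NEJM authors or any particular vendor.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One audit entry: what the model saw, what it recommended, and what the clinician did."""
    patient_id: str           # internal identifier (illustrative)
    model_name: str           # hypothetical tool name
    model_version: str        # exact version deployed when the recommendation was made
    inputs: dict              # the data actually fed to the model
    output: str               # the model's recommendation or score
    clinician_action: str     # e.g., "accepted" or "overridden"
    clinician_rationale: str  # the clinician's documented reasoning
    timestamp: str            # when the decision was recorded (UTC)

# A hypothetical entry for a drug-interaction alert that the physician overrode
record = AIDecisionRecord(
    patient_id="12345",
    model_name="drug-interaction-checker",
    model_version="2.3.1",
    inputs={"medications": ["warfarin", "amiodarone"]},
    output="high interaction risk flagged",
    clinician_action="overridden",
    clinician_rationale="Dose already adjusted; INR monitored weekly.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Written to an append-only log so the record, and the model version it names,
# can be produced later if the decision is ever litigated.
print(json.dumps(asdict(record)))
```

Keeping the model version in every record is what makes the legacy-software question above so pointed: the log is only useful as evidence if the version it names can still be run, or at least inspected, years later.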

Integrating AI into healthcare presents many legal challenges, from product liability claims against developers to malpractice suits against physicians. Understanding the nuances of AI-related liability is crucial, especially as courts grapple with the complexities of these technologies. As healthcare organizations adopt AI tools, they must carefully assess the risks and benefits while ensuring comprehensive monitoring and documentation to navigate potential legal pitfalls.

[1] I am using the term healthware to include physical devices, like a pulse oximeter, infusion pump, or surgical robot, as well as stand-alone software, e.g., tools to summarize clinical encounters or to determine the need for prior authorization, and intrinsic software that alerts, warns, or advises physicians and other care providers.

[2] In 2023, Congress passed just 27 bills, including measures that named “some Veterans Affairs clinics [and] commissioned a commemorative coin for the 250th anniversary of the Marine Corps.”

 

Sources: Understanding Liability Risk from Using Health Care Artificial Intelligence Tools, NEJM. DOI: 10.1056/NEJMhle2308901

Stanford Center for Human-Centered Artificial Intelligence: Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

YouTube: Understanding Liability Risk from Using Health Care Artificial Intelligence Tools


