Thinking Out Loud: Artificial Intelligence Comes for Healthcare   

By Chuck Dinerstein, MD, MBA — Jul 21, 2023
While academics explore the bounty and pitfalls that Artificial Intelligence (AI) offers, and Big Tech continues to hype the possible over the actual, the feds seek to make regulations. Corporate healthcare, in all its forms, is fighting back. Should we be techno-optimists, Luddites, or somewhere in between?

A recent article in JAMA offered five possible roles for AI in healthcare:

  • Reduction in rote work – billing, scheduling, office management
  • Complement, rather than replace, physicians – the idea that AI will be a tool to raise the overall quality of their work.
  • Enabling “less expensive monitoring, diagnosis, and personnel needs” – remote monitoring or providing guidance to “mid-level clinicians” (nurse practitioners and physician assistants), making them more like the physicians they replace.
  • To exceed human thinking – “A major priority is to gather large samples of data that are based on ground truth rather than just perceptions of truth.” This would eliminate our bias and erroneous assumptions. But, of course, that will require data collected by those same individuals with bias and faulty assumptions.
  • AI can generate hypotheses and associations; it cannot identify causality.

Given the economic bent of the first three possibilities, it is no surprise that the author is an economist. The fourth possibility, the potential of Big Data to reshape day-to-day healthcare with algorithms, has been a promise, not an accomplishment, since 2009, when the Health Information Technology for Economic and Clinical Health (HITECH) Act provided $27 billion in incentives for adopting Big Data's repository: electronic health records (EHRs).

Best estimates suggest that about 70% of our EHRs are interoperable, meaning data gathered by one system can be shared with another. EPIC, the largest provider of EHRs in the US, fought against interoperability until accepting federal rules in 2020. But any patient who has used more than one healthcare system or physician can attest that even something as straightforward as a list of current diagnoses may vary. Without an agreed-upon data dictionary, so that our words share the same meaning, the narrative has limited value and cannot easily be “cleaned” for use as data. Moreover, because the EHR's fundamental role is as a billing platform, cutting and pasting data from one visit to the next is all but guaranteed: garbage in, garbage out.
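To make the data-dictionary problem concrete, here is a minimal sketch, in Python, of why free-text problem lists from two systems cannot be reconciled by simple matching. The patient data and the hand-built ICD-10 mapping are hypothetical, invented only for illustration:

```python
# Problem lists exported from two different EHRs for the same (hypothetical) patient.
system_a = ["Type 2 diabetes mellitus", "HTN", "CHF"]
system_b = ["DM2", "Hypertension", "Congestive heart failure"]

# Naive string comparison finds no overlap at all.
print(set(system_a) & set(system_b))  # -> set()

# Only after mapping every local term to a shared vocabulary (here, ICD-10 codes)
# do the two lists agree. Building and maintaining this mapping, the data
# dictionary, is the hard, unglamorous part.
to_icd10 = {
    "Type 2 diabetes mellitus": "E11.9", "DM2": "E11.9",
    "HTN": "I10", "Hypertension": "I10",
    "CHF": "I50.9", "Congestive heart failure": "I50.9",
}
coded_a = {to_icd10[term] for term in system_a}
coded_b = {to_icd10[term] for term in system_b}
print(coded_a & coded_b)  # -> {'E11.9', 'I10', 'I50.9'}
```

Scaling that mapping beyond three diagnoses, and keeping it current, is exactly the work a shared data dictionary would do; without it, narrative text stays narrative.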

The New Regulations

What exactly is required of the new AI algorithms by the Office of the National Coordinator for Health Information Technology (ONC)? In a word, transparency – enough for clinicians to judge whether they are trustworthy.

  • To disclose to users how predictive models have been trained and tested – that would allow clinicians a better sense of the “ground truth rather than just perceptions of truth” that companies would hype (a sketch of what such a disclosure might contain follows this list).
  • To assess and disclose algorithmic risk – without knowing the limits inherent in a system, how can it be used safely and appropriately? Corporate entities have a terrible track record in this regard, for example, Boeing and the 737 MAX.

“Boeing failed to reevaluate the system or perform single- or multiple-failure analyses of MCAS….737 MAX pilots were precluded from knowing of the existence of MCAS and its potential effect on aircraft handling without pilot command.” 

- Congressional Report on the 737 Max

 

  • To create a way for users, including clinicians, to report problems – This regulatory "ask" on the part of the ONC is insufficient because such complaints are often addressed slowly, or not at all, by the software developer. Can mission-critical software wait for the next version to correct problems? And from the viewpoint of malpractice litigation, who will record and maintain the legacy software that might be culpable for patient harm? [1]
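As a rough illustration of the first requirement, the training-and-testing disclosure could take the form of a “model card” that travels with the algorithm. The sketch below is hypothetical; the field names, the model, and the numbers are invented for illustration and are not an ONC schema:

```python
from dataclasses import dataclass

# A hypothetical "model card": the training-and-testing disclosure a developer
# might attach to a predictive model so clinicians can judge its ground truth.
@dataclass
class ModelCard:
    name: str
    intended_use: str                   # the clinical question the model answers
    training_data: str                  # source, date range, and population
    excluded_populations: list[str]     # groups on which the model was never validated
    test_performance: dict[str, float]  # metrics on a held-out test set
    known_limitations: list[str]        # the algorithmic risk a user should see up front

card = ModelCard(
    name="inpatient-deterioration-risk-v2",
    intended_use="Flag adult inpatients at elevated short-term risk of deterioration",
    training_data="A single health system's EHR encounters, 2015-2019",
    excluded_populations=["pediatric patients", "obstetric admissions"],
    test_performance={"AUROC": 0.78, "sensitivity_at_10pct_alert_rate": 0.41},
    known_limitations=["performance not verified outside the training institution"],
)
```

Whether developers or deployers must produce and maintain such a disclosure is precisely what the public comments below argue about.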

Despite these software developers’ public calls in the media for greater regulation, the comments they provided to the ONC reflect concern that enhanced transparency for users would result in an “unfair” competitive advantage and diminished innovation. Consider this from Google’s public comments:

“Assign to AI deployers the responsibility of assessing the risk of their unique deployments, auditing, and other accountability mechanisms as a result of their unparalleled awareness of their specific uses and related risks of the AI system… the organization deploying an AI application should be solely responsible for any disclosure and documentation requirements about the AI application because it is best positioned to identify potential uses of a particular application and mitigate against misuse.” [emphasis added]

Or the public comments of Microsoft:

“Advancing robust AI governance also requires further technical and human-centered research into how best to ensure responsible development and use. Priority should be given to the following areas:
  • Development of evaluation benchmarks and metrics with real-world relevance. …AI issues like accuracy, toxicity, biased output, truthfulness, and groundedness
  • Explainability. Advancing an understanding of why models are generating the outputs that they do will be an important part of ensuring organizations can be accountable for their use and impact where AI is being used in high-risk scenarios, including to take critical decisions.” [emphasis added]

Do those emphasized words suggest any responsibility for the use of AI on the part of the developers, or is “the organization deploying an AI application” solely responsible? Given that deploying organizations will be held accountable, in whole or in part, with whom in those institutions will accountability rest? While it is easy to identify healthcare systems eager to embrace AI, there is little information on how many have AI evaluation and safety governance in place.

 

[1] Software developers take great pains to shield themselves from this type of litigation. Under the “learned intermediary” doctrine, developer risk is limited as long as “adequate instructions or warnings of foreseeable risk” are provided. Moreover, those terms-of-use agreements, always signed but rarely read, include “hold harmless” provisions.


Chuck Dinerstein, MD, MBA

Director of Medicine

Dr. Charles Dinerstein, M.D., MBA, FACS, is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.
