AI in healthcare

Electronic Health Records (EHRs) were once hailed as the way to streamline healthcare, but their implementation has brought a host of challenges: heavier workloads, clinician burnout, and new avenues for medical error. With billions invested and adoption rates having soared, we find ourselves retrofitting the system. But this isn't just about optimizing technology; it's about preserving the heart of healthcare.
As developers and health systems embrace artificial intelligence-powered software, a pressing question emerges: Who bears the burden when these innovations inadvertently harm patients, especially when legal precedent offers only faint guidance? Let's take a look.
Everyone is using and embedding artificial intelligence in their work, or is about to. I am not so concerned about the imminent arrival of Skynet; those bits and pieces are already in place. What concerns me more is that AI, already a misnomer, will increasingly become real stupidity and hurt patients along the way.
Artificial Intelligence, or AI, including large language models (e.g., ChatGPT), is gaining much traction. When “taught to the test,” one system passed the United States Medical Licensing Examination (USMLE), the three-step exam required for medical licensure in the U.S. Will doctors be among the first white-collar (white-coat?) workers to be replaced by automation?