Supreme Court Chief Justice Roberts is concerned about computer systems making judgments to assist judges in court. Concerned enough that the Supreme Court asked the federal government to file a "friend of the court" brief on whether it should hear the case of a man convicted of a crime whose sentencing was informed by a software program designed to evaluate his risk of recidivism.
The man, Mr. Loomis, appealed his conviction, arguing that "his right to due process was violated by a judge's consideration of a report generated by the software's secret algorithm, one Mr. Loomis was unable to inspect or challenge."
Wisconsin's Supreme Court denied the appeal, ruling that the algorithm could be used. Judges must take account of the algorithm's limitations and the secrecy surrounding it, the court wrote, but the software could be helpful "in providing the sentencing court with as much information as possible in order to arrive at an individualized sentence." The company involved defends that secrecy: "The key to our product is the algorithms, and they're proprietary," one of its executives said last year. "We've created them, and we don't release them because it's certainly a core piece of our business."
Wisconsin's Attorney General pointed out that "Loomis knew everything the court knew." Judges do not have access to the algorithm either, he wrote.
Algorithmic medical care is already here. Computers have assisted in diagnosis for many years; most EKG machines, for example, automatically print a series of diagnoses that is subsequently "over-read" by the billing cardiologist. But, in general, those algorithms can be understood. That is not the case for algorithms based on machine learning, where even the programmers do not know how the program makes its decisions.
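To make that contrast concrete, here is a minimal sketch in Python. The weights and feature values are entirely hypothetical (the 120 ms QRS threshold is a standard conduction-delay criterion); the point is only the difference in kind between a rule a clinician can read and dispute and a learned model whose decision lives in its numbers.

```python
import math

# Transparent algorithm: the rule IS the explanation, and a clinician
# can inspect or challenge it directly.
def rule_based_flag(qrs_duration_ms: float) -> bool:
    """Flag a possible conduction delay (QRS over 120 ms)."""
    return qrs_duration_ms > 120

# Black-box model: the "explanation" is a pile of fitted numbers.
# These weights are made up; a real diagnostic network would have
# millions of them, stacked in many layers.
W_HIDDEN = [[0.8, -1.2, 0.3], [-0.5, 2.1, -0.7]]  # inputs -> hidden units
W_OUT = [1.4, -2.3]                               # hidden units -> score

def _sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def learned_risk_score(features: list[float]) -> float:
    """Return a 0-1 'risk'. Nothing here states WHY the score is what
    it is; the decision is distributed across all the weights."""
    hidden = [_sigmoid(sum(w * x for w, x in zip(row, features)))
              for row in W_HIDDEN]
    return _sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)))

print(rule_based_flag(132))                           # True, and you can say why
print(round(learned_risk_score([1.0, 0.2, 3.1]), 3))  # a number, but why?
```

Even this two-unit toy yields no statable reason for its score; scale the weight count up by several orders of magnitude and the opacity the courts confronted becomes unavoidable.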
The developers behind Google's effort to beat the world's best players at the game of Go (the computer won) have no idea how the computer did it. Similar artificial intelligence systems are being developed to analyze mammography images and to examine the retina for diabetic retinopathy. They are not yet clinically deployed, but they will be. That is why the courts' approach to the problem is important.
It seems that:
- The courts believe these programs are helpful in gathering "as much information as possible" and arriving at an "individualized sentence." Will similar medical systems be considered equally useful in gathering information to inform personalized diagnosis or treatment? Will they become a standard of care?
- The courts allowed companies to maintain the proprietary nature of their algorithms. Will those companies be free of liability when diagnosis and treatment are incorrect? After all, how can a reasonable person explain a decision cloaked in proprietary secrecy and not understood even by its developers?
- While patients will know everything the doctor knows, at least regarding a diagnosis provided by algorithms, liability will not be equally shared.
Algorithmic medicine can and does improve the uniformity, and therefore the quality, of our work. But there is a conflict between privacy, security, and the best care of patients. The New York Times, no surprise, wants the private sector out of the algorithms, arguing that government can develop its own and allow defense lawyers to evaluate them.
Because physicians are being told to incorporate electronic health records into their practices, and are distracted by rapidly changing methods of payment as we switch from fee-for-service to global fees, bundled care, and the like, there is no ongoing discussion of algorithmic medicine: black-box medicine, if you will. The concern should be that these systems, glittering objects of cutting-edge technology, will be foisted on physicians and patients by government with little real-world consideration.