Machine Bias - The Algorithm Made Me Do It


Algorithms are sets of rules or calculations that computers use to make decisions. Unlike neural network programs, which are often too complex for even their programmers to understand (implicit), explicit algorithms are code that can be read and understood. Algorithms are everywhere and have become the topic of an ongoing series by ProPublica. The series began by concentrating on a program written to advise courts on recidivism risk, but given the ubiquitous nature of algorithms, it has moved on to examine what big data and its algorithms know about you from Facebook and Amazon.

As a health professional, I wanted to consider algorithms applied to medical care. A trusted source, the federal government's Agency for Healthcare Research and Quality, had documented 17,000 algorithms and programs as long ago as 2011. Today a quick search will locate several apps on iTunes offering medical algorithms for download.

With the growth of personalized medicine, algorithms are increasingly offered and applied in clinical care. They come in two forms: explicit, meaning the decision-making rules and calculations are there for all to see, and implicit, a 'black box' process that no one can fully explain. Consider the American College of Surgeons NSQIP Surgical Risk Calculator. Yes, it comes with a disclaimer, and it is incumbent upon a knowledgeable physician to apply and share the information as they see fit; the physician retains responsibility. This calculator is a form of explicit personalized medicine: the underlying data are available, and the connections between the patient variables and the outcome are known, if not always well understood. Another example is the use of estrogen and progesterone receptors and human epidermal growth factor receptor 2 (HER2) in determining the treatment of patients with breast cancer. Anyone viewing the algorithm can understand it.
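To make "explicit" concrete, here is a minimal sketch of a rule-based risk score in Python. The variables, weights, and thresholds are invented for illustration - they are not taken from the NSQIP calculator or any clinical source - but the point is that every rule is visible and auditable.

```python
def surgical_risk_category(age, smoker, diabetic):
    """Hypothetical explicit risk score: every rule and weight is
    visible, so any reader can trace exactly how the output was
    produced. (Weights are invented, not clinical values.)"""
    score = 0
    if age >= 65:
        score += 2   # older patients contribute 2 points
    if smoker:
        score += 1   # current smoker contributes 1 point
    if diabetic:
        score += 1   # diabetes contributes 1 point
    # Map the point total to a coarse risk category.
    if score >= 3:
        return "high"
    elif score >= 1:
        return "moderate"
    return "low"

print(surgical_risk_category(70, smoker=True, diabetic=False))   # high
print(surgical_risk_category(40, smoker=False, diabetic=False))  # low
```

A physician (or a court) can dispute any single rule here, because each one is stated in the open; that transparency is what distinguishes the explicit form.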

Increasingly, personalized medicine's future lies with implicit machine-learning algorithms - the ones where no one knows how the calculation was made - "black box medicine." This was the topic addressed by the Harvard Journal of Law & Technology. The learning model underlying these programs is to take a collection of similar individuals and construct predictions and recommendations from their outcomes. We are all familiar with this model and see it every day: it is the basis for recommendations by Amazon, and like Amazon's algorithm, it is not necessarily transparent. The Harvard article discusses some of the underlying legal issues.
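The "collection of similar individuals" model can be sketched as a nearest-neighbor prediction. This is a simplified stand-in, not the actual method of Amazon or any clinical system, and the patient records below are invented; it shows why such predictions resist explanation - the "reasoning" is just proximity in the data.

```python
import math

# Toy records: (feature vector, observed outcome). Data invented for
# illustration only - e.g. (age, smoker flag).
patients = [
    ((55, 1), "complication"),
    ((60, 1), "complication"),
    ((35, 0), "no complication"),
    ((40, 0), "no complication"),
]

def predict(features, k=3):
    """Predict an outcome for a new patient from the k most similar
    past patients (Euclidean distance, majority vote). There is no
    stated rule to inspect, only similarity to prior cases."""
    ranked = sorted(patients, key=lambda p: math.dist(features, p[0]))
    votes = [outcome for _, outcome in ranked[:k]]
    return max(set(votes), key=votes.count)

print(predict((58, 1)))  # the nearest neighbors are 'complication' cases
```

Asked "why?", the only honest answer this model can give is "because patients like you had that outcome" - which is precisely the transparency problem the Harvard article raises.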

First and most importantly, liability for using the algorithm remains unsettled. This is a largely unexplored and potentially contentious area of the law - vaccine and some device manufacturers are already shielded from liability. Algorithms are now considered medical devices subject to FDA regulation. How will they be classified: as Class II, moderate risk, requiring only premarket notification showing equivalence to an existing standard such as physician judgment, or as Class III - so novel that full premarket approval (with its associated cost and time) is required? Once approved, the algorithm will be a protected medical device; the manufacturer will not be held liable for its use, and responsibility will fall to the physician or health system. That will be the day that the algorithm made me do it.

Second, there is the issue of adoption: will physicians and patients trust these black-box algorithms in making decisions? As Arthur C. Clarke stated, "Any sufficiently advanced technology is indistinguishable from magic." Are we coming full circle, with the doctor becoming a shaman once again?

Finally, in discussing the cost of developing these systems, the article addressed how the investment would be recovered and the work monetized. Currently, algorithms are protected by patent, but this is difficult in healthcare because "laws of nature, natural phenomena, and abstract ideas cannot be patented." Moreover, if the algorithm were to change over time, as biological systems do, the protection would be lost. Instead, the authors suggest that the laws governing trade secrets might be the best means of protection: "Knowledge that is reasonably kept secret which derives independent economic value from its secrecy is protected from misappropriation by state and federal trade law."

Automation of medicine is underway. One of the significant errors of the labor movement in the face of industrial automation was allowing its new tools to be designed by others, who created machines to supplant rather than augment workers. Physicians should learn from history and help design and create these new tools.