High-Risk Artificial Intelligence

By Chuck Dinerstein, MD, MBA — Oct 27, 2021
Once, a long time ago it seems, individuals used rules of thumb (heuristics, to give them their fancy name) to navigate transactions, social or commercial. As the scale of our interactions grew, rules of thumb gave way to algorithms, which were, in turn, unleashed to create new algorithms based upon artificial intelligence. Somewhere along the way, those artificially intelligent algorithms became dangerous. What is high-risk artificial intelligence? Spoiler alert: it is already upon us. Welcome to our version of Skynet.

As with many things digital, the EU has led the way: in privacy protections, in trying to define the ethics of artificial intelligence, and now in trying to define what counts as high-risk. The EU’s Artificial Intelligence Act was introduced earlier this year and is now in the hands of the European Parliament. For the EU, a high-risk system is an algorithm that can harm one’s health or safety or infringe on one’s rights; that is a very broad mandate.

Among the systems considered high-risk would be facial recognition, algorithms that recommend medical intervention, autonomous cars, and geotagging. (If you want a better understanding of how geotagging may or may not invade your privacy, take a look at this article from Wired.)

Like the EU’s privacy rules, these rules will apply to all companies doing business in the EU, which means what is decided there will influence the behavior of the same companies in the US. What happens in the EU does not stay in the EU. US companies were the fourth-largest group of commenters on the legislation, behind those from Belgium, France, and Germany.

These high-risk algorithms, once identified, will face more scrutiny. Specifically, their providers will be responsible for:

  • Creating and maintaining a risk management system for the entire lifecycle of the system;
  • Testing the system to identify risks and determine appropriate mitigation measures, and to validate that the system runs consistently for the intended purpose, with tests made against prior metrics and validated against probabilistic thresholds;
  • Establishing appropriate data governance controls, including the requirement that all training, validation, and testing datasets be complete, error-free, and representative;
  • Providing detailed technical documentation, including around system architecture, algorithmic design, and model specifications;
  • Automatically logging events while the system is running, with the recording conforming to recognized standards;
  • Designing the system with sufficient transparency to allow users to interpret its output;
  • Designing the system to maintain human oversight at all times and to prevent or minimize risks to health and safety or fundamental rights, including an override or off-switch capability. [1]

I couldn’t agree more, especially with the requirements that the system be maintained and be “sufficiently transparent to allow users to interpret” its output.
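To make those last two requirements a little more concrete, here is a minimal sketch, in Python, of what automatic event logging and a user-interpretable output might look like for a hypothetical credit-scoring rule. Every name in it (score_applicant, the 0.40 threshold, the refer_to_human path) is my own illustration, not language from the Act.

    # A minimal illustration (not language from the Act): log every decision event
    # automatically and return a record a person can read and second-guess.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("ai_system")

    def score_applicant(income: float, debt: float) -> dict:
        """Hypothetical credit rule: return a decision plus the factors behind it."""
        ratio = debt / income if income else float("inf")
        # Borderline cases go to a person, a crude form of the Act's human-oversight idea.
        decision = "approve" if ratio < 0.40 else "refer_to_human"
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": {"income": income, "debt": debt},
            "debt_to_income": round(ratio, 2),
            "decision": decision,
            "reason": f"debt-to-income ratio {ratio:.2f} against a 0.40 threshold",
        }
        log.info("decision event: %s", record)  # automatic, timestamped event logging
        return record

    print(score_applicant(income=50_000, debt=15_000))

The arithmetic is beside the point; what matters is that the inputs, the decision, and the reason behind it are recorded and legible to the people affected.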

The objections came primarily from businesses, the defense mostly from rights advocacy groups; no surprises there. Of course, knowing a company’s objections can tell you a lot about the company. For example, Facebook was concerned that the provision regulating “subliminal techniques that manipulate people” extends to targeted advertising. That, by itself, should tell you much more than the recent Senate hearings. The credit agencies and credit card companies were concerned that the high-risk category includes assessments of creditworthiness.

When you look at the objections, they sort themselves into two essential categories. The easiest to identify is cost. All those requirements to document and maintain cost time and money, neither of which adds to the bottom line. These cost concerns have funded a cottage industry in predicting what compliance will cost companies. As you might anticipate, the regulators’ estimates are lower than the businesses’. These predictions will generate many white papers and no doubt a few academic articles, but rest assured, any costs, high or low, will be passed along to consumers.

The second objection is far more critical, and it revolves around liability. The objection was first raised by NEC, a company heavily involved in facial recognition, which warned of “an undue amount of responsibility on the provider of AI systems.” These are tools; if you misuse them, that is on you. [2] That thinking, by the way, has been the case in the US, at least for autonomous aircraft flight systems, since the ’30s. The preponderance of blame for accidents lies with pilot error, not the flight systems. (Humans have been found responsible both when they turn the systems on and when they turn them off.)

Liability is, to a large degree, what our current discussion over social media is all about. It is not about censorship, although that is a concern. It is about an algorithm that optimizes what you see to hold your attention and serve up ads. Remember, Facebook’s concern is about targeted advertising, its business model. How do we control for high risk? The EU is looking at the algorithms themselves, asking companies to be transparent about their creation, maintenance, and use. But that is difficult for proprietary algorithms and even more difficult when “machine learning” delivers the desired outcome but you cannot explain how it was reached. Congress is looking at who controls the algorithm: is it time to break up Facebook? We did such a great job with Standard Oil, US Steel, and the Bell System. Microsoft was only “winged”; it was found guilty of monopolistic practices but not broken up.

 

[1] “The Artificial Intelligence Act: A Quick Explainer,” Center for Data Innovation

[2] There are very few software systems that are genuine tools meant for other developers to use in training and developing artificial intelligence. How to manage these general systems is a separate question.

Based on Wired’s “The Fight to Define When AI Is ‘High Risk’”

 

