Patty the Pedestrian vs. Sandy the Self-Driving Car

Five years after Uber’s autonomous car, with a safety driver at the wheel, struck and killed a pedestrian, the driver pleaded guilty to one count of reckless endangerment and was sentenced to no prison time, just three years of supervised probation. The law is designed to serve a deterrence function and mete out punishment for wrongdoing. So, did the law serve its function here? Does the law appropriately address these new technologies?

Criminal law is designed to protect the populace (and the state) from wrongdoings by punishing those who committed bad acts, typically with prison time and/or a fine, as a deterrent to those considering such activities in the future. The civil action for negligence provides a similar deterrence function but also affords a remedy, “restorative justice,” to the individual (or their family) against wrongdoers.

While money can never replace a human, monetary awards do afford some solace. And then there is an element of vengeance, which has a role, though enlightened lawyers don’t like to discuss it. Negligence actions also set a standard of care for acceptable conduct in today’s society. So, did the family of the pedestrian, Elaine Herzberg, receive its due? Was some societal message sent to drivers of AI-powered vehicles about what constitutes acceptable conduct? Do we know what standard of care will be deemed responsible behavior?

Human versus algorithm

All eyes focused on the car’s driver, Rafaela Vasquez, and blinked at the involvement of Uber – which instituted and ran the driverless-vehicle program. First, the county attorney cleared Uber of criminal wrongdoing, giving Uber the green light to continue its testing program, which it promptly did (before shutting down the unit). Next, Uber quickly and quietly settled the civil action for an undisclosed sum, sweeping any social message off the popular radar.

Rafaela Vasquez’s plea deal prevents her from shifting the blame to Uber, as her lawyers had planned to do, by exposing the failures the National Transportation Safety Board identified. And there were plenty of deficiencies:

  • The technology’s failure to identify Herzberg as a pedestrian, and thus to apply the brakes
  • Uber’s inadequate safety culture
  • Uber’s loosened requirement that there be two test pilots in each car (a rule that had kept drivers more alert and compliant with the no-cell-phones policy)
  • Solo drivers assigned the same monotonous routes on hours-long shifts

The NTSB reports that driver distraction was the probable cause of the accident. Vasquez was alleged to have been watching The Voice on her cell phone against company policy. [1] Vasquez claims she was monitoring company systems on her handset, as she was asked to do. Confusingly, she admitted in her plea deal that even this monitoring was reckless conduct. It seems to me she is willingly covering for Uber, inexplicably accepting personal responsibility for bad company policy when she pled guilty to reckless endangerment. That crime is defined as “recklessly endangering another person with a substantial risk of imminent death or physical injury”; yet, if her story is to be credited, the reckless conduct was a distraction required by company policy.

The victim here may or may not have been the most sympathetic of plaintiffs. Autopsy reports indicate blood levels of controlled or illegal substances; toxicology found methamphetamine and marijuana. These findings would have reduced any damage award by an amount designated by a jury to account for her contributory negligence. Reports indicated the victim was homeless but close to getting off the streets, with friends describing her as someone who cared for those around her. She was reportedly known in the homeless community as “Elle” and “Ms. Elle.” A skilled lawyer could have exacted a handsome award for the Herzberg family and a powerful message for society. Alternatively, such a victim may not be one a prosecutor would want to build a case around.

Much of the debate on the issue centers on the argument that AI-powered vehicles are, number for number, safer than human-driven cars. Technophiles note that human driving error kills more than 40,000 people annually, a far cry from the AI record. Government officials, by contrast, argue the technology isn’t yet ready, or safe enough, for public roads. So the argument is framed: an imperfect technology, but one safer than human drivers and better than the status quo.

The Calculus of Precaution

These arguments are off-point, at least under basic negligence theory. Once Uber voluntarily assumed the role of providing autonomous driving, it was incumbent on the company to provide the safest form that a reasonably prudent person or company would offer. It doesn’t matter that the basic technology might be safer than an “ordinary” human. If the technology could be made still safer, and it wouldn’t be unduly burdensome to make it so, then, according to the famous formula of Judge Learned Hand, it was incumbent on Uber to do so. [2] This is the standard of care we expect from an ordinary prudent person or company. Failure to act prudently in a manner that causes an accident renders the company liable.
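The Hand formula reduces to a simple comparison: a party is negligent when the burden of a precaution (B) is less than the probability of harm (P) multiplied by the magnitude of the loss (L). The sketch below illustrates the arithmetic; the dollar figures and probabilities are purely hypothetical assumptions for illustration, not figures from the Uber case.

```python
def hand_negligent(burden: float, probability: float, loss: float) -> bool:
    """Learned Hand formula: a party is negligent if B < P * L,
    i.e., the cost of the precaution is less than the expected harm."""
    return burden < probability * loss

# Hypothetical numbers: suppose an alertness chime costs $200 per car (B)
# and averts a 1-in-10,000 chance (P) of a $10 million wrongful-death loss (L).
# Expected harm: 0.0001 * 10,000,000 = $1,000, which exceeds the $200 burden.
print(hand_negligent(burden=200, probability=1 / 10_000, loss=10_000_000))  # True
```

Under these assumed numbers, forgoing the $200 precaution would be negligent, because the expected harm it would prevent ($1,000) exceeds its cost.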

We can look to the training of airline pilots to provide some examples of what should be reasonably required, as my friend and colleague Chuck Dinerstein has written here and here. Upgraded training and periodic certification, the standard for similar technology used in planes, should be essential. This is something government should and could legislate. Without legislation, a private negligence suit achieves a similar result, casting the complainant as a “private attorney general.” An angry jury renders high awards, juggernauted by punitive damages, which often reach the monetary stratosphere – a potent deterrent of bad company conduct if the awards are high enough.

Perhaps the most significant safety omission was Uber’s failure to address driver “automation complacency” – our tendency to pay less attention to automated processes that demand little human input. Layering this well-known phenomenon on top of “highway hypnosis” creates a potentially explosive danger. Because it was foreseeable that Uber’s drivers would succumb, Uber was charged with preventing it. Even a chime every so often, or programming the seats to vibrate at random intervals, might have jolted the driver from complacency. Having drivers call a central station and talk to a monitor might have helped.

Settlement of the civil matter, perhaps even more than prosecutorial disinterest, foreclosed essential inquiry into what should be required in this age of new tech.

“I don’t want the story of the first automated vehicle fatality to be a lie. Or be a matter of disputes. We should get answers.”

 – Bryan Walker Smith, Professor of Law, University of South Carolina

There is much to be learned from this scenario. AI technology is infiltrating all aspects of our lives. At a recent law professors’ conference session on ChatGPT, the majority of legal writing instructors and law librarians favored teaching law students to integrate ChatGPT into their regular legal research, noting, of course, the “Law-Bot’s” tendency to “hallucinate,” i.e., to make up citations. “Hallucinate” is the charitable appellation applied to the Bot; were the act done by a human, it would be called fraud or misrepresentation.

While the students would be taught to verify the Law-Bot’s work, no one mentioned the tendency of students to succumb to “automation-complacency,” which we can surely expect in any profession that invites automation into its domain. Automation complacency (reducing human involvement, interest, and concern) is an affliction we need to be mindful of and institute steps to prevent, not just in driving but in any endeavor we share with the algorithms.

 

[1] She was allowed to listen to the program.

[2] The formula, espoused in United States v. Carroll Towing Co., states that a person or entity is negligent if the burden of taking precautions is less than the probability of harm multiplied by the potential magnitude of the harm. In other words, if the cost of preventing harm is less than the expected harm, one must take reasonable precautions or be considered negligent.
