
When Autonomous Cars Crash, Who's at Fault?

From Car and Driver

Somewhere way off in a future California, a line of 35 computer-driven cars is trundling up Interstate 5 in a tight, wind-cheating formation at 110 mph, each car a mere two inches off the bumper of the preceding car. Now let’s say a tire blows on car 17, or an electronic glitch befalls car 7 and it locks a wheel. Passengers will be rudely awakened from their intravenous-doughnut haze, their quadruple lattes doing the old abstract expressionist bit on the headliner. Let’s hope everyone’s safe.

Or imagine a computer-controlled vehicle puttering around a city, with no one at the wheel, slowing down to pick up a passenger. Then, because of an insecure network, the car gets hacked by the Brotherhood for Authentic Mexican Cuisine, and it sails past the loading zone, whizzing at high speed into the nearby Chipotle. Or imagine a bird flying into the sensor array of a Level 5 vehicle, knocking out the car’s ability to see. Mayhem ensues.


Who is to blame? Who gets sued in these scenarios? The tire company? The software maker? The drivers who touched “Agree” on their startup screens? In all cases, we appear to be exchanging one set of understood risks for another, more opaque sort. And the implications of these new risks will need to be figured out before driverless cars can operate in the real world.

Automation seems destined to shift this blame from individual drivers onto carmakers or equipment manufacturers, which are characteristically wary of assuming that liability. But law is often based on precedent, and at the moment, there has been precious little litigation dealing directly with robocar crashes. The most infamous case is not even a case yet, as the family of Joshua Brown, the driver killed in May 2016 when his Tesla Model S operating in Autopilot mode struck a semi that crossed its path, has yet to file a lawsuit. But Brown’s crash—the first death attributable at least in part to automated-vehicle technology—illustrates the complexity of assigning blame in these situations.

Tesla has already been seemingly exonerated by three investigations: one by NHTSA, another by the Florida Highway Patrol, and the last by the National Transportation Safety Board. The reports conclude that the Level 2 Autopilot and automatic emergency braking systems in the Tesla worked as designed and that Brown did not heed Tesla’s general warnings about driver vigilance while using the technology. According to the NTSB report, Brown responded only minimally to the system’s specific alerts to take control, touching the wheel for a total of 25 seconds during the 37-minute period that the car was under Autopilot control. Yet these findings do not preclude Brown’s family from suing in state court and alleging that Tesla bears some responsibility for the crash.

This access to the court system is one of the foundations of our current framework of product liability, one that many legal experts believe is capable of adapting to this new technology. Proponents of letting the courts sort out the liability issues involving vehicle automation point out that tort law—the redress of harms—has a long track record of managing the complex interactions between humans and the machines we create. These laws have provided strong financial incentives for companies to innovate while still prioritizing the safety of their products, from automated elevators to airplane autopilot systems to industrial robots. But concerns that corporate liability fears might delay the introduction of potentially life-saving driverless technologies have given rise to alternative theories of liability to preempt the tort system.

For now, however, determining fault in crashes involving Level 2 vehicles such as Brown’s Model S will likely hinge on manufacturers arguing that drivers who are relying on these systems are misusing them. Indeed, even Level 3 vehicles will operate autonomously in such narrowly defined circumstances that responsibility for crashes will likely be handed off to the human, just as the car itself hands off control to the driver when the requirements for driverless operation are no longer met. Yet Brown’s attorney still could counter with a number of arguments: that Tesla was negligent in its design of a system that did not shut itself off after repeated warnings to the driver were ignored; that Tesla misrepresented the capabilities of the Autopilot feature, implying even in its name that the system could function like that of a Level 3 or even Level 4 vehicle; or that Tesla should be held strictly liable, meaning that the inherent danger of its Autopilot system automatically implicates the company. Similar arguments will no doubt be made in court as driverless-vehicle technologies proliferate.

Safety and its attendant liability are such concerns in this burgeoning industry that some manufacturers have disavowed Level 3 vehicles. They have concluded that partial automation is potentially worse than none at all, because humans are ill-suited to take over the controls in the sorts of potentially dangerous situations in which a semi-automated system would abdicate its authority. This rift in the marketplace seems ripe for lawyers to probe.

That is, assuming they get their chance. Advocates for tort reform have suggested that federal regulations for computer-controlled vehicles might supersede the state courts, establishing one set of requirements for manufacturers to meet in order to shield them from liability lawsuits. Other proposed remedies include no-fault insurance and an accident compensation fund. Legal experts have also suggested that a cost/benefit approach to liability cases involving driverless vehicles might weigh whatever harms are claimed against the number of crashes (and deaths) prevented by the technology, thereby reducing manufacturer liability and tipping the scales of justice toward the larger public benefit. In the final analysis, it will be incumbent on manufacturers and automotive suppliers to make their systems as bulletproof as possible, if not for their own bottom lines, then to engender user trust. Expect more run-flat tires in our future.


Risk Shift

This year, the consulting company Accenture, together with the Stevens Institute of Technology, published a report stating that the proliferation of computer-driven vehicles will reduce both the number and the cost of traditional insurance policies, cutting revenue for insurance companies. The report predicts that insurers will compensate by selling cybersecurity, product (software/hardware) liability, and infrastructure policies to automated-vehicle OEMs and suppliers, tech companies, and government agencies.



