Everyone’s A Bad Driver (Except Me And My Autonomous Car)

In the good old days we worried that other humans were amoral instead of worrying that robots are amoral…

When news broke this week that autonomous cars operated by Google and Delphi have been involved in 12 crashes since they began testing, the reaction was predictably breathless. Ever since the technology was announced, commentators have been obsessed with the technical and ethical shortcomings of the robot chauffeurs that Silicon Valley insists are the solution to the roughly 33,000 road deaths that take place in the US each year.

As driverless technology continues to advance, these fears won’t simply go away; on a psychological level, humans seem wired to fear anything that diminishes our sense of control, even if that sense of control is an illusion. This psychological barrier, irrational though it may be, demonstrates a crucial reality of the transition from cars to autonocars: developing technology that improves on the dismal safety record of human drivers is far easier than re-organizing social and individual values that have evolved over the hundred-year history of the automobile.

In many ways this panic over the amorality and risk posed by autonomous cars reflects the debates that gripped the United States at the advent of the automobile. For early automotive reformers, steeped in the Social Darwinism of the early 20th century, the presence of “flivverboobs” and “motor morons” on public roads stood out as proof that large portions of the population were either morally or physically incapable of safely operating the strange and terrifying new machines populating American roads. For a broad number of reasons (1), early auto safety reformers wrote off a percentage of the driving public as irredeemably amoral, and quickly gave up on trying to “fix” the anti-social nature of these most visible agents of on-road danger. As the initial chaos unleashed by motorcars evolved into a more normalized (yet still highly unsafe) dynamic, reformers came to realize that road safety was a far broader problem than an incurable minority of terrible drivers, and focused instead on improving automotive engineering, education and enforcement.

This pivot hardly eliminated the problem of on-road danger; indeed, auto safety would not begin to consistently improve until the federal government began more forcefully regulating automotive safety in the 1970s. But by improving car control, building up a road infrastructure that encouraged safer driving and beefing up the consistency and enforcement of the rules of the road, early auto safety reformers helped normalize the risks drivers faced in the early days of the automobile. Long before the stepped-up regulation that the auto industry had long resisted actually began consistently reducing on-road deaths, Americans became inured to the staggering risks they faced every time they got behind a wheel.

Today, autonomous cars face a similar challenge: their novelty infects the public with deeply irrational fears about their amorality. But by focusing on the character of these new robot drivers, we ignore the broader context: autonomous cars have an extremely low bar to clear in order to improve on the performance of human drivers. This moral panic is obvious in the breathless reactions to this week’s news: while headlines and commentators decry the secrecy surrounding autonomous car crashes and frame these failures as evidence of the immaturity of the underlying technology, a closer reading reveals that only two of the four crashes since September of last year actually occurred while a human driver was not in control, and that all four happened at speeds below 10 MPH. And because regulations on autonomous car testing require that the human drivers be trained and approved by the state, the real story here is that even drivers with above-average skills cannot consistently keep themselves safe from on-road collisions.

The psychology of our autonocar moral panic is not difficult to discern: no human could drive as much as most Americans do while remaining conscious of the fact that they are taking on massive risks of expense, injury and death. The old chestnut that says you are more likely to die driving to the airport than in an airplane accident rarely causes anyone to feel less fear while flying than they do behind the wheel. This suggests that ubiquity, rather than mere statistical safety, will quiet our fears about giving up control to robot chauffeurs. Only by regularly flying does the lack of personal control cease to inspire fear in the airplane passenger. Similarly, only by gradually giving up control of our cars will we cease to fear the statistically far more competent autonomous cars.

Thus the gradual approach to autonomous vehicle technology championed by established automakers gains the upper hand on firms like Google, which seek to create a self-driving car revolution. Having conditioned drivers to ignore the very real danger presented by their statistically terrible driving skills, automakers are perfectly positioned to ease drivers into the autonomous age by slowly adding more and more “driver assistance” technology into their cars. Statistically speaking, retaining the element of cars that is responsible for more than 90% of “accidents” (you, the human driver) is a terrible way to improve auto safety (2). But, if history is a reliable guide, the irrational social and psychological barriers to autonomous cars may end up being far more challenging than the technological ones.

This is where Silicon Valley’s mantra of better living through engineering falls flat. Though I have every confidence in Google’s ability to engineer a self-driving car that vastly improves on the safety record of human drivers, its efforts at overcoming the public’s psychological barriers to autonomy remain deeply lacking. By treating the media and regulators as ignorant interlopers, Google betrays a self-destructive overconfidence in its engineering prowess that only deepens the public’s innate distrust of its robot creations. Instead of dismissing the public’s fears with statistics alone, Google and others must embrace the fact that the burden of proof is on them and that irrational anxieties can only be displaced by steady engagement with society’s watchdogs. For autonomous cars to succeed, the key disruption needs to take place in our coding, not theirs.

(1) See: “Hell on Wheels: The Promise And Peril of America’s Car Culture, 1900-1940” by David Blanke for a thorough discussion of this topic.

(2) Indeed, Nicholas Carr’s “The Glass Cage: Automation and Us” convincingly argues that partial autonomy creates unique human-machine interface challenges that can actually reduce safety.