(Bloomberg) -- Self-driving cars are supposed to be our salvation, drastically reducing the 1.3 million road fatalities worldwide each year by replacing humans with robots that can do precision piloting and never get distracted, drowsy or drunk. So why did a self-driving Uber SUV being tested in Tempe, Arizona, strike and kill a woman March 18 without so much as hitting the brakes? The accident will only drive more anxiety about autonomy, given that 63 percent of Americans already say they’d be afraid to ride in a fully self-driving car, according to an AAA survey. As experts and investigators pore over the Arizona fatality to determine the cause, here’s a look at some factors that will determine how safe robot rides may, or may not, be.
1. How do self-driving cars work?
Driverless cars “see” the world around them by using data from cameras as well as radar and lidar -- sensors that bounce laser light off objects to determine their shape and location. High-speed processors crunch the data to create a 360-degree picture of lanes, traffic, pedestrians, signs, stoplights and anything else in the vehicle’s path. That’s supposed to enable the vehicle to know, in real time, where to go and when to stop.
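The lidar ranging described above rests on simple physics: the sensor times a reflected laser pulse, and distance is half the round-trip time multiplied by the speed of light; sweeping that measurement through 360 degrees yields the map of the car's surroundings. The sketch below is purely illustrative, not any manufacturer's actual software, and the function names are invented for this example.

```python
import math

C = 299_792_458.0  # speed of light in meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object: the pulse travels out
    and back, so the one-way distance is half the total path."""
    return C * round_trip_seconds / 2.0

def to_cartesian(distance_m: float, bearing_deg: float) -> tuple:
    """Place one lidar return on the car's 360-degree picture
    (x = meters ahead of the car, y = meters to its left)."""
    theta = math.radians(bearing_deg)
    return (distance_m * math.cos(theta), distance_m * math.sin(theta))

# A pulse returning after roughly 66.7 nanoseconds corresponds to an
# object about 10 meters away.
d = lidar_distance(66.7e-9)
x, y = to_cartesian(d, 45.0)
```

In a real system, millions of such returns per second are fused with camera and radar data before the software decides where to go and when to stop.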
2. Why do engineers think they’ll be safer?
U.S. safety regulators say 94 percent of crashes can be blamed on human error. And with smartphones distracting more drivers, U.S. road deaths surged 14 percent from 2014 to 2016. Meanwhile, robots are always alert and focused on the driving task. And advances in artificial intelligence can combine with on-road experience to make cars become even safer drivers over time.
3. But I’ve heard of other accidents. What are the problems?
Robot drivers are strict rule followers, and most accidents to date have been human drivers rear-ending slow-moving or stopped self-driving cars. The death in Arizona was the first fatality involving a fully autonomous car. In Florida in 2016, the driver of a Tesla Model S who was operating the car in its so-called Autopilot mode died when his vehicle collided with a truck that turned into its path. That driver-assist system required the human driver to remain alert and engaged in the driving process. Federal accident investigators found Autopilot’s design contributed to the crash, as did the actions of both drivers in the incident. The automaker made modifications to require more driver input.
4. What happened in Arizona?
That is still being investigated. Police reports and video of the crash suggest a 49-year-old woman pushing a bike crossed a road in Tempe outside of a crosswalk around 10 p.m. Neither the self-driving sensors nor the human safety driver appeared to notice her, and the car struck her at 38 miles per hour without braking. She was rushed to a hospital, where she died of her injuries.
5. Was that a fluke?
That’s what’s being investigated. Autonomous vehicle experts say the car’s sophisticated sensors should have detected her and the system should have hit the brakes. But self-driving technology still struggles to distinguish pedestrians from inanimate objects like bushes, poles or bags that blow into the road. Also, in the cockpit video released by police, the human safety driver appears to be looking down before the crash and so may not have seen the pedestrian until it was too late.
6. How big a setback is the crash?
So far, Uber appears to be suffering the biggest setback. The company suspended its autonomous testing program, which had already racked up millions of miles. Arizona’s governor has banned Uber from operating autonomous cars on the state’s roads indefinitely, putting its ambitious plan to offer a robot taxi service to the public by year-end in jeopardy. The head of Waymo, a unit of Google-parent Alphabet Inc., said its autonomous cars would have recognized the pedestrian and stopped. Uber’s own tests have required more intervention by human safety drivers than in testing conducted by Waymo and others, according to a New York Times report. Still, there will be broader fallout. The fatality is likely to amp up anxiety about autonomy that all makers of self-driving cars are working to overcome. Toyota Motor Corp. and companies operating in the city of Boston have temporarily suspended autonomous car testing on public roads while the Uber accident is investigated.
7. Who decides if cars are safe enough to drive themselves?
For now, individual U.S. states have their own regulations for how autonomous cars can be tested on their streets. The federal government has not yet come up with rules for the road for self-driving cars. The U.S. House approved a measure in September that would allow thousands of automated vehicles to hit the roads while federal regulators develop safety standards. A similar bill is stalled in the Senate where Democrats are pushing for tougher oversight and consumer protections.
8. What happens next?
Until the cause of the Arizona fatality can be determined, safety advocates have called on the autonomous industry to tap the brakes on deploying vehicles on public roads. But many companies’ plans are still moving forward. Waymo already is offering commuters in Phoenix the ability to hail a Chrysler minivan without anyone behind the wheel. Audi expects to soon begin selling a version of its A8 sedan that can take over completely in traffic jams. And next year, General Motors Co. has promised to put robot taxis into service, with no human chaperone.
The Reference Shelf
- A Boston Consulting Group report, “Revolution in the Driver’s Seat,” looks at the technical and non-technical issues surrounding autonomous cars.
- A New York Times Magazine article, “Death by Robot,” on the ethical questions involved in driverless cars and other computer-controlled technologies.
- Two journal articles examine legal and ethical questions concerning driverless cars.
©2018 Bloomberg L.P.