Our streets should not be a laboratory.
The recent death of an Arizona woman hit by a self-driving Uber car is a sobering reminder of the risks of this developing technology. It has also highlighted the fact that few safeguards protect the public as city streets become proving grounds for driverless cars, amid intense competition between tech giants and traditional car companies to bring these once-futuristic vehicles to market.
You might think the federal government requires the makers of self-driving cars to prove their street-worthiness before allowing them to test-drive on public roads. That’s not the case. But it should be.
Mayor Steve Adler often touts Austin as the “Kitty Hawk of self-driving cars.” It was here in Austin in October 2015 that a Google prototype safely piloted a blind man to a park in the first real-world trip involving a fully automated car. Google, which launched the self-driving car project now known as Waymo, voluntarily showed city officials its project and kept Adler’s office in the loop on its 2015 test-driving efforts in the Mueller neighborhood. Human drivers were ready to take over if anything went awry.
Finding out who’s on the road now is another story. As American-Statesman reporter Ben Wear recently explained, a state law passed last year does not require companies to tell Texas or local governments when they are putting self-driving cars on streets or highways with no human in control.
Senate Bill 2205 allows self-driving cars on Texas highways, as long as they meet any federal regulations and the manufacturers deem their own creations capable of complying with the rules of the road.
But the National Highway Traffic Safety Administration, which sets the standards for vehicles driven on American streets, hasn’t issued any regulations on self-driving cars. Not wanting to stymie innovation, the agency has simply provided suggestions for car makers to consider as they develop the automated technology.
That’s fine for prototypes being tested in computer simulations and on closed courses. But if car makers want the benefit of test-driving in our communities without a human ready to take the wheel, they should be required to demonstrate to NHTSA that the cars can handle essential tasks, such as identifying pedestrians, discerning when to stop for a bus or pass it, and recognizing the hand signals of a police officer directing traffic.
NHTSA’s current emphasis on voluntary standards — basically allowing companies to hit the streets whenever they believe they’re ready — fails to account for the obvious gaps in readiness between different self-driving cars. Reports submitted to California last year indicate Waymo’s self-driving cars went about 5,600 miles, on average, between uncertain or dangerous situations requiring a human to take the wheel. By contrast, Uber set a goal of just 13 miles between human “interventions” — and struggled to meet even that standard in its recent testing in Arizona, the New York Times reported, based on internal documents.
Looking at those track records, would a regulator feel equally comfortable letting both of those cars steer themselves on public roads? It’s doubtful. But at this point, no one’s asking that question in Texas or at the federal level. And only some states are even collecting the data.
Regulating car safety features is the responsibility of the federal government. NHTSA should create a certification process that self-driving cars must complete before they can share the road with the rest of us. Legislation that passed the U.S. House of Representatives, and is still being hashed out in the Senate, could kick-start the process, if NHTSA has any doubt about its authority to safeguard the public. That law could also address important questions over protecting self-driving cars from malicious hacking and establishing who can access the personal trip data generated by these cars.
Texas lawmakers who overlooked common-sense safety measures last year when opening state roads to self-driving cars can fix that next session. They should require self-driving cars to obtain NHTSA certification or have a human operator ready to take over driving at a moment’s notice. They should also require companies to announce when they are test-driving in an area and share their data, so the public can see how driverless cars are performing.
Proponents of self-driving cars often present an idealistic sales pitch: Unlike humans, this technology never gets tired or distracted. Self-driving cars, they argue, remove the factor of human error that leads to nine out of 10 crashes.
But it’s clear this developing technology, while incredibly promising, is not perfect. The tragic death last month of a woman crossing the street in Tempe revealed Uber’s self-driving car failed the essential function of detecting a pedestrian — and the human on board wasn’t prepared to intervene.
Elsewhere, two drivers of Tesla cars have died while their partially automated, commercially available vehicles were operating on Autopilot. In a 2016 crash in Florida, the car failed to detect a large white truck against the bright sky. Investigators are still studying the second crash, a fiery collision last month with a highway barricade in California.
Even the smart features on widely available cars aren’t foolproof. As he urged the Texas Senate Committee on Transportation last year to take a cautious approach with self-driving cars, retired San Antonio mechanical engineer Don Dixon noted the blind spot detection system in his new car is helpful, but it doesn’t work in the rain or when a bright glare hits the sensors.
“As an engineer,” he said, “I have yet to see anything that will not fail.”