Humans must set the rules for self-driving cars

There is currently no substitute for human common sense and responsibility behind the wheel.

By Sandeep Gopalan (Behind The Wheel)

Published: Thu 21 Nov 2019, 9:02 PM

Last updated: Thu 21 Nov 2019, 11:05 PM

Remember Elaine Herzberg? While Ms Herzberg was the first person to be killed by a self-driving car in March 2018, hers is not the first life to be sacrificed in the name of human progress. Automakers, regulators, and the average punter pursuing the liberating fantasy of autonomous vehicles must learn from the final investigation report into Herzberg's death. Here's what and why.
To recap: on a clear evening in the Arizona desert, with no adverse weather conditions, Herzberg was struck and killed by a Volvo SUV operated by Uber while she was jaywalking under the influence of drugs. Understanding why Herzberg died requires knowing how self-driving cars work and what went wrong that day.
Uber had equipped a 2017 Volvo XC90 with its developmental automated driving system, designed to operate in fully autonomous mode only on pre-mapped, designated routes. The SUV had radar, lidar, cameras, sensors, GPS, and data processing systems. It also had an inward-facing camera and a tablet for the human operator. Briefly, the lidar emits light pulses and measures distance from their reflections - the time interval between emitting a pulse and receiving it back is used to compute distance. The radar system works similarly with radio waves. A system of 10 cameras gives a 360-degree view of the environment. Data from all systems are fused to classify objects and create a constantly updated view of the surrounding environment. Once an object is classified, it is assigned a goal. For instance, an object classified as a vehicle has a goal of moving in the direction of traffic within that lane.
The system drives the car calibrating its motion under this model.
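The ranging principle described above is simple enough to sketch. This is an illustrative calculation only, not Uber's code: a lidar pulse travels to the object and back, so the one-way distance is half the round-trip time multiplied by the speed of light.

```python
# Illustrative lidar time-of-flight ranging (a sketch, not Uber's system).
SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def lidar_distance_m(round_trip_seconds: float) -> float:
    """One-way distance to an object from a pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A reflection received 0.5 microseconds after emission implies an
# object roughly 75 metres away.
print(round(lidar_distance_m(0.5e-6), 1))  # → 74.9
```

Radar ranging follows the same time-of-flight arithmetic, only with radio waves instead of light.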
On March 18, 2018, the car reached a speed of 44mph 5.8 seconds before the crash. Two-tenths of a second later, the radar system detected Herzberg but classified her as a 'vehicle.' The lidar detected her 5.2 seconds before the crash but classified her as 'other,' and therefore predicted her path as 'static.' At 4.2 seconds before collision, the lidar reclassified the pedestrian as a 'vehicle'; however, the system had no tracking history for the newly classified object and again predicted her path as 'static.' Thereafter, during the 3.8s-2.7s timeframe, it alternately classified Herzberg as 'vehicle' and 'other.' Critically, the car continued to travel at 45mph.
Things changed again: 2.6 seconds before impact, the lidar reclassified her as a 'bicycle.' Again, the path prediction was 'static.' One second later, the classification changed again to 'other.'
Now, however, the system detected that the object was partially in the path of the SUV and planned to maneuver around it. Abruptly, 1.2 seconds before impact, the system reclassified her as a 'bicycle' and predicted she was fully in the path of the SUV.
The system then determined a potential hazard and executed its 'action suppression' process. 0.2 seconds before collision, the vehicle began slowing and sounded an auditory alert to the operator. Finally, the operator took control and hit Herzberg at 39mph.
Too late.
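The sequence above can be compressed into a small replay. The times and labels come from the NTSB account as summarised here; the code itself is only illustrative:

```python
# Replay of the pre-crash classification timeline (times in seconds
# before impact, per the NTSB report as summarised in this article).
timeline = [
    (5.6, "radar",  "vehicle",         "n/a"),
    (5.2, "lidar",  "other",           "static"),
    (4.2, "lidar",  "vehicle",         "static"),   # no tracking history carried over
    (2.6, "lidar",  "bicycle",         "static"),
    (1.6, "lidar",  "other",           "partially in path"),
    (1.2, "lidar",  "bicycle",         "fully in path"),  # 'action suppression' begins
    (0.2, "system", "braking + alert", "n/a"),
]

for seconds, source, label, path in timeline:
    print(f"{seconds:.1f}s out | {source:<6} | {label:<16} | {path}")
```

Notice that every reclassification discarded the object's tracking history, so the path prediction kept resetting to 'static' until the final moments.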
So, what does all this mean? Herzberg was detected a full 5.6s before impact. If the system had correctly classified her as a pedestrian, she would not have lost her life. As the NTSB investigators observe chillingly, the 'system design did not include a consideration for jaywalking pedestrians.' And it did not include a requirement for emergency braking even after determining she was directly in the path of the SUV - merely slowing down 1.2 seconds before collision.
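The detection margin is worth quantifying. A back-of-envelope sketch, assuming a hard-braking deceleration of about 0.7 g (that figure is my assumption, not from the report):

```python
# Back-of-envelope check: how much ground did the SUV cover in the 5.6
# seconds between first detection and impact, versus the distance a hard
# stop would have needed? The 0.7 g deceleration is an assumed value.
MPH_TO_M_S = 0.44704

speed = 45 * MPH_TO_M_S             # ~20.1 m/s
detection_window = 5.6              # seconds from first radar detection
distance_covered = speed * detection_window

decel = 0.7 * 9.81                  # assumed hard-braking deceleration
braking_distance = speed ** 2 / (2 * decel)

print(f"ground covered after detection: {distance_covered:.0f} m")
print(f"hard-stop braking distance:     {braking_distance:.0f} m")
```

On these assumptions the car travelled roughly 113 metres after first detecting Herzberg, when a hard stop would have needed only around 29 metres: braking even three seconds late would still have avoided the collision.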
Critically, the Volvo SUV had a collision avoidance system of its own which would have applied emergency braking. However, Uber's ADS had deactivated this because of apprehensions about incompatibility between the two systems.
What are the lessons from this tragedy? There is a considerable distance to go before cars are fully autonomous. Whilst the system is capable of detecting objects, classifying an object correctly and predicting its future behavior accurately remain subject to error. The classification changed in ways that do not mimic human reasoning. It is inconceivable that a human driver's brain would have switched back and forth between 'vehicle,' 'other' and 'bicycle' after observing a pedestrian pushing a bicycle, and continued at the same speed.
The human remains central to the safe operation of automated vehicles for the present.
The NTSB's investigation revealed that the human operator was inattentive, distracted by her cell phone: she was watching a video show. In addition, she was apparently impaired by drugs.
Uber clearly knew about the limitations of its automated driving system. As the NTSB report concludes, Uber's system was not designed to avoid collision when there was 'sudden hard braking of a lead vehicle or an initially obscured pedestrian darting in front.' Uber's system was predicated on a human operator who was 'expected to intervene and take control of the vehicle if the circumstances are truly collision-imminent, rather than due to system error/misjudgment.'
Uber knew that an attentive human operator was essential to prevent such collisions and mitigate harm. And it was fully aware that its human operators were not always actively engaged in the task of driving and externalized that responsibility to the automated system.
Therefore, Uber's failures, in inadequately screening human operators and in not mitigating against their likely distraction and abdication of responsibility for driving, were reckless. And its failure to deactivate the ADS when the SUV's internal camera showed the operator was watching TV is inexcusable. Uber was indifferent to the serious consequences for human life of its inadequate risk mitigation processes.
In this tragedy, only one person lost her life. It could have been far worse. Regulators must demand better risk mitigation, mandate data sharing, and impose minimum standards. Finally, Herzberg's death illustrates that regardless of the number of cameras, radars, lidars, and other tech, there is currently no substitute for human common sense and responsibility behind the wheel. Uber must not be allowed to shift those human virtues to machines.
- Dr Sandeep Gopalan is the Vice Chancellor, Piedmont International University, North Carolina
