Children can be difficult for autonomous vehicles to identify for several reasons. They're smaller, they behave more erratically than adults, and they can even be carried inside other objects, such as strollers. Because of that, it'd be reasonable to expect AVs to exercise more caution, not less, around anything that might be a child. However, according to The Intercept, GM's autonomous robotaxi division, Cruise, kept its vehicles on the road despite their inability to recognize children as effectively as adults.
Internal Cruise materials reviewed by The Intercept reportedly show that the company was aware of its AVs' inability to adequately recognize children, yet pressed ahead with service expansions anyway. This news comes just weeks after the California Department of Motor Vehicles (DMV) revoked Cruise's AV taxi permits after one of its cars struck a woman who had already been hit by another vehicle and dragged her 20 feet down the street. The company was discovered to have withheld key footage of that incident from state regulators.
Cruise publicly acknowledges that accidents are impossible to avoid completely, whether a vehicle is driven by a human or by software. In response to The Intercept's piece, a Cruise spokesperson said that the company emphasizes caution around children.
"We have the lowest risk tolerance for contact with children and treat them with the highest safety priority. No vehicle—human-operated or autonomous—will have zero risk of collision," said Eric Moser, Cruise's director of communications.
However, internal documents reportedly say otherwise. A previously unreported internal safety assessment determined "Cruise AVs may not exercise additional care around children," making it clear the GM self-driving unit is apparently aware that its vehicles need to improve at distinguishing kids from adults. It's a concern apparently reflected in the firm's own simulated tests. "Based on the simulation results, we can’t rule out that a fully autonomous vehicle might have struck the child," said one assessment, according to The Intercept.
Most AV technology—both in fully autonomous robotaxis and in automakers' advanced driver-assistance suites—uses some form of machine learning to improve as the system encounters new scenarios. However, according to Phil Koopman, a Carnegie Mellon engineering professor who specializes in AV safety, machine learning is a fundamentally flawed method for safety technology because it only learns after something bad has happened.
“If you were only training it how to handle things you’ve already seen, there’s an infinite supply of things that you won’t see until it happens to your car. And so machine learning is fundamentally poorly suited to safety for this reason," said Koopman.
It isn't encouraging to hear that Cruise willfully ignored its vehicles' inability to recognize children and behave accordingly, especially given the company's history of technical issues and past cover-ups. Thankfully, the California DMV has shut down Cruise's AV operations for the time being, so these problems can hopefully be worked out in the interim. If robotaxis are going to drive around on their own, they need to be at least as aware of their surroundings as the average driver. Right now, that doesn't seem to be true of GM's service.
Got tips? Send 'em to firstname.lastname@example.org