Can Semi-Automated and Fully Automated Cars Coexist?

From Car and Driver

It’s often assumed that technological progress in semi-automated cars will culminate in the fully computer-driven machine, and that along the way the two will need to interact on the same roads. But is this necessarily the case? Can humans and machines peacefully share the road, and can engineers and programmers figure out how to make that work? Dr. Alexander Hars is a former business professor at the University of Southern California and the founder of Inventivio GmbH, which provides consulting services to companies working on driverless mobility. We asked him to explain how it might all come together.

C/D: How can fully computer-driven cars truly coexist with human-driven ones?

AH: I think that full autonomy is not something you should picture as a self-driving car driving everywhere in the world. In the early stages of full autonomy, it’s likely that fully autonomous vehicles will drive only on known streets, only in areas where we know they can operate. If you look at Waymo in Phoenix, this is exactly what they are doing. There is no reason to expect this not to work in the next couple of years.

C/D: Humans are very unpredictable, even when fully automated cars are confined to one area. Can the lofty goal of near-zero fatalities be achieved with human drivers still in the picture?

AH: Driving is not a creative act. If we think about what’s going on, it’s information processing. You need to know your environment, you need to understand how the different traffic participants typically behave, to calculate, to predict to a certain extent. Fundamentally it’s information processing, something computers are good at, and they err on the side of caution. It will only be possible to let self-driving cars on the road when they are substantially safer than humans; otherwise they would make so many obvious mistakes that people would revolt immediately.

C/D: Some automakers have said it will be harder to realize Level 3 automation, which requires human intervention at a moment’s notice, than Level 5, which would never involve a driver. What do you think?

AH: Level 3 is fundamentally a wrong idea. It’s very difficult, and it decreases safety because it makes driving much more complicated. You have the whole interfacing problem between human and machine, and the assumption that humans are vigilant. Every safety expert will tell you there’s one thing humans are not good at: paying attention to something, supervising something, for a long time. It’s just impossible. Level 3 should be avoided, and [we should] go directly to Levels 4 and 5.
