i3 | May 18, 2020

What's Holding up Self-Driving Cars?

by Robert E. Calem

It’s 2020, the year pundits predicted self-driving cars would be mainstream. But those predictions have proved premature. What happened? Experts say it has been harder than anticipated to meld all the elements needed to realize the dream, including solving technology, regulatory and business model puzzles.

Yet hold on, they say, the promise will be kept. The new prophecy is for self-driving cars to be commonplace by 2030.

A Mélange of Misses

“The reality is that development is still moving full steam ahead, but we’re doing things that have never been done before. It is proving to be more complex, it’s more challenging and it’s high risk,” says Danny Shapiro, senior director of automotive at semiconductor maker NVIDIA Corp. in Santa Clara, CA. “What we’re trying to ensure is that we achieve the highest level of safety that we possibly can,” through technology and other means. “It ultimately comes down to people believing that the vehicle will be safe.”

There are four challenges facing self-driving car development today, says James Hodgson, principal analyst for smart mobility and automotive at ABI Research in Wellingborough, U.K. Those are technical (“the perception stack”), the regulatory environment, verification (“to demonstrate robustly in a formalized way that your vehicle is safe”), and business case reconciliation.

“Ultimately there’s an extraordinary amount that can be achieved with camera sensors alone, to have a good semantic understanding of what’s happening” around a vehicle, Hodgson says. “But there are still a few blind spots,” he adds. While there are companion sensors competing to augment cameras, the artificial intelligence (AI) and “deep learning” that makes sense of what’s seen is not always sufficient.

“A couple of years ago there was a big push in AI and deep learning and everyone recognized how potent that approach could be in radically improving the quality and reliability” of a self-driving car’s perception and identification “with the kind of computing power that we have these days,” Hodgson recalls. Nevertheless, “this attitude that we had of ‘trust the magic’” has disappeared with some accidents, and presently “no regulator is going to take your word for it. You have to formally prove a self-driving car’s safety. And it’s very difficult to do that with AI, because the actual training process is probabilistic, it’s not deterministic.” A lot of software development remains to be done to make it verifiable, he says.

There are four challenges facing self-driving car development today. Those are technical, the regulatory environment, verification and business case reconciliation.
— James Hodgson, ABI Research

Hodgson says the cost of adding self-driving systems to cars has kept them out of mainstream vehicles. He pegs the price to an automaker for SAE Level 4 implementation (the ability for a car to drive itself under certain conditions) at $4,000 “at scale,” making it economically feasible in luxury vehicles only. As a result, many self-driving cars have been relegated to MaaS (mobility as a service) fleets to generate recurring service-based revenues. Yet, “that’s really challenging to do,” and automakers “haven’t been very inventive here,” he says.

“That shift toward mobility services was never something that was [automaker]-led. That was something they felt pushed into, because if they didn’t do it, the business would suddenly go to competitors from outside the traditional automotive supply chain,” such as newcomers like Waymo. “They’re happy to take their foot off the gas, sticking to that passenger vehicle business model for as long as it’s viable,” Hodgson maintains.

ABI forecasts that SAE Level 3 and Level 4 luxury cars will be in the marketplace around 2023-24, with mid-priced vehicles following in 2026-27. It will be the end of this decade when mass-market vehicles are equipped with self-driving technology, and this will be a $1,000 option, Hodgson says. “It’s going to be an underwhelming start compared to what was originally fanfared,” he adds.

“It’s not helpful that Tesla vehicles keep running into things,” says Roger Lanctot, director of automotive connected mobility at Strategy Analytics in Newton, MA. “We’re still struggling with semi-autonomous operation, let alone full autonomous operation,” he adds. “The real issue is the lack of a regulatory regime to let these things run wild. This robotaxi proposition is probably further away than it seems. What’s missing is a business model. It’s a very expensive piece of equipment trying to replace human-driven taxis, which are delivering reasonably priced transportation in reasonable comfort and with reasonable reliability.”

However, the opposite could be true of self-driving commercial trucks, roboshuttles at places like airports, and specialty equipment (such as military vehicles), Lanctot states. Those could make business sense, he says.

Indeed, according to a March report by Lux Research, titled Automating the Last Mile, package deliveries by self-driving vehicles and autonomous drones will total 20 billion items annually by 2030, of a total 289 billion parcel deliveries that year. “Robot-as-a-service business models are emerging in startups developing last-mile automated delivery technologies,” says analyst Josh Kern in a release. “Large companies that can invest in and develop their own technologies are not expected to use these services, but logistics companies and retailers with no experience in robotics likely will.”

“Almost since their inception, self-driving cars have been hyped to a state of constant imminence. This led to wild expectations on the availability of the technology,” says Manuela Papadopol, CEO of Designated Driver, a startup based in Seattle, WA, that provides teleoperation of self-driving vehicles by remote human drivers. “It was overhype that led autonomy developers and [automakers] to claim full driverless capability in less than five years only to have the reality of this highly non-deterministic problem settle in. One by one, the automakers have taken back those Level 4 commitments.”

“There is a very long tail of exception cases in the development of an autonomy system, and humans (teleoperators) can handle these extremely well — self-driving technologies less well,” Papadopol explains.

To be sure, teleoperation can encompass more than remote driving by a human in real-world situations that challenge a vehicle’s built-in technology. “Teleoperation brings the human touch to an autonomous deployment by enabling a remote assistant to assure passengers and inspire confidence in a way that a robot simply cannot,” she suggests.

“From a purely computational perspective,” the delay in self-driving cars is rooted in false expectations surrounding AI and teaching autonomous systems to run completely independently without mistakes, which is “a very significant problem” to solve, says Tal Cohen, the founding partner of DRIVE TLV, based in Tel Aviv, Israel, and focused on smart mobility.

But this difficulty is exacerbated by the stumbling block of making self-driving vehicles work in a complex human transportation environment coupled with regulation and cooperation among regulators (such as smart city and transportation officials), technology creators and automakers. Thus, the first step in the evolution of self-driving cars is the application of autonomous tech in less complicated operations, such as for agriculture or mining. “Learning from these case studies,” he says, will lead to next-level strategies for deploying self-driving cars.

“From a purely computational perspective,” the delay in self-driving cars is rooted in false expectations surrounding AI and teaching autonomous systems to run completely independently without mistakes, which is “a very significant problem” to solve.
— Tal Cohen, DRIVE TLV

It’s possible the companies having the greatest success with self-driving cars are staying quiet to keep what they’re doing proprietary, says Bob McQueen, CEO of Bob McQueen and Associates in Orlando, FL. The firm advises the private sector and aids the public sector — departments of transportation in the U.S., Europe, Asia and the Middle East — with regard to connected and automated vehicles. According to the U.S. Department of Transportation there are roughly 1,400 autonomous vehicle field tests nationwide, but no sharing of the lessons learned, he says.

But McQueen says that driverless vehicle development may have reached the maturation level where technology sharing makes sense. “Why is there no equivalent of the FAA (Federal Aviation Administration) for technology applied to road transport?” he asks. “That might be an organizational answer to sharing data, moving the industry forward.”

Defining Leads to Understanding


Because cars can harm or kill, it’s important to be certain that a self-driving car won’t, and testing inefficiencies are limiting the ability to have absolute certainty, says Ziv Binyamini, CEO and co-founder of Foretellix Ltd., based in Ramat Gan, Israel. Predominant testing processes can deal with hundreds of situations while what’s needed is the ability to deal with tens of millions, he expounds. And a root cause of this problem is a paucity of common definitions or language to describe potential scenarios a self-driving car may encounter, so that “all stakeholders — from developers to testing engineers to regulators — can talk about the same thing,” Binyamini says. Once that’s achieved, the next step is setting metrics: defining the levels of safety expected of a vehicle in given circumstances.

Foretellix has developed a “measurable scenario description language” named MSDL for the industry to use and is contributing to the development of a new standard named OpenSCENARIO 2.0 by the Association for Standardization of Automation and Measuring Systems (ASAM). Binyamini says, “You need regulation to define what is safe enough” based on quantifiable verification. Foretellix also devised a method called “coverage driven verification” that defines a driving scenario (such as one car cutting in front of another) at a very high level, specified once, with software automatically generating many meaningful variations on it, including unexpected results. This capability can be built into a car, he adds, and what it finds, together with real road data from driven test vehicles, can be later aggregated in a central “safety dashboard” for vastly more efficient industry-wide learning.
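The idea of specifying a scenario once and letting software expand it into many concrete test cases can be sketched in a few lines of Python. This is a minimal illustration of the coverage-driven approach, not Foretellix’s MSDL: the scenario name, parameters and values below are all hypothetical.

```python
import itertools
import random

# Hypothetical "cut-in" scenario, specified once as parameter ranges.
CUT_IN_SCENARIO = {
    "lead_speed_kph": [30, 60, 90, 120],   # speed of the cutting-in car
    "gap_m":          [5, 10, 20, 40],     # gap it cuts into
    "lateral_time_s": [0.5, 1.0, 2.0],     # how quickly it changes lane
    "road_surface":   ["dry", "wet", "icy"],
}

def generate_variations(scenario, limit=None, seed=0):
    """Expand a high-level scenario into concrete test cases.

    Exhaustively combines every parameter value; optionally samples a
    random subset when the full space is too large to simulate.
    """
    keys = list(scenario)
    combos = [dict(zip(keys, values))
              for values in itertools.product(*scenario.values())]
    if limit is not None and limit < len(combos):
        random.Random(seed).shuffle(combos)
        combos = combos[:limit]
    return combos

cases = generate_variations(CUT_IN_SCENARIO)
print(len(cases))   # 4 * 4 * 3 * 3 = 144 concrete tests from one spec
```

Real scenario languages add constraints, distributions and coverage metrics on top of this kind of expansion, but the payoff is the same: one readable specification yields the large, systematic test space that regulators and test engineers can reason about together.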


“The industry suffers from ambiguity in terminology,” and not just regarding self-driving car technology, says Heikki Laine, vice president of product and marketing at Cognata Ltd., based in Rehovot, Israel. The same applies to current advanced driver assistance systems (ADAS), Laine says.

Cognata makes a simulation platform and training data for automakers and their Tier 1 suppliers to develop autonomous driving and advanced driver assistance systems. “At the end of the day, the big challenge is around the diversity and the scale of exposure that the system has to highly accurate, highly realistic perception data,” Laine says.

The principle is the same for an “automotive visual intelligence platform” developed by Cartica AI, based in Tel Aviv, which teaches vehicles to see the world through object “signatures,” says Karl-Thomas Neumann, a strategic advisor to the company. Just as people learn to distinguish a wine glass from a water glass but recognize the common elements of each, Cartica’s technology lets a car learn as it travels by picking out, for example, signature attributes of a traffic sign that it has never seen before or one that is damaged or mounted incorrectly. It gets better with experience and, although it’s now aimed at ADAS, it can be adapted to self-driving cars when needed, Neumann says.
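Recognition by attribute “signatures” can be illustrated with a toy sketch: summarize each known object as a set of low-level attributes, then classify a new observation by how much it overlaps with the stored sets. This is only a conceptual illustration under assumed attribute names; it is not Cartica AI’s actual implementation.

```python
# Hypothetical attribute signatures for two road signs.
KNOWN_SIGNATURES = {
    "stop_sign":  {"octagon", "red", "white_border", "pole_mounted"},
    "yield_sign": {"triangle", "red", "white_center", "pole_mounted"},
}

def classify(observed, known=KNOWN_SIGNATURES, threshold=0.5):
    """Return the best-matching label, or None below the threshold.

    Uses Jaccard similarity (shared attributes / all attributes), so a
    damaged or partly occluded object can still match on whatever
    attributes remain visible.
    """
    best_label, best_score = None, 0.0
    for label, signature in known.items():
        score = len(observed & signature) / len(observed | signature)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

# A bent stop sign: its shape is distorted, but other attributes survive.
damaged = {"red", "white_border", "pole_mounted"}
print(classify(damaged))   # still matches "stop_sign"
```

The robustness the article describes, recognizing a sign that is damaged or mounted incorrectly, falls out of matching on partial overlap rather than requiring every attribute to be present.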

Over the next five years, automakers will roll out in-vehicle platforms and architectures that can be updated frequently, as opposed to occasional over-the-air software updates, notes Artur Seidel, vice president for the Americas at Elektrobit, a global supplier of software for the automotive industry. “But the motivation can’t be ‘I do this to get my warranty costs down,’” Seidel says. “That’s the wrong mindset.” Rather, he says, those ongoing updates should be for refining a vehicle’s AI. Because AI performance is a consequence of the underlying training data, the industry needs to come up with processes for sharing that data within and between companies, he says. “We’re still one order of magnitude removed from understanding the larger scenes of situations”: the intentions of objects and the associations among them.

Seidel adds that future vehicle systems must be architected to account for changing sensor technologies. “One strength that the vehicle can have compared to a human being is, our sensors, as good as they are, are limited to our eyes and our ears — and the car can do better. While it will not overcome the shortcomings that may exist on the AI side,” he explains, “every car has to ship with significant headroom” in terms of hardware, to meet evolving software requirements over a decade’s time. “That’s actually the biggest shift now.”
