International Trade Today is a service of Warren Communications News.
‘Strong Science Fiction’

MIT Researcher Sees Prolonged ‘Mixed-Use Environment’ for Autonomous Cars

There “absolutely” will come a time when L5 autonomous vehicles will be “out there” on global roadways, Bryan Reimer, research scientist at MIT’s AgeLab, told CE Week’s Connected Mobility Conference in New York Wednesday. But L5 -- short for Level 5, Department of Transportation and industry jargon for cars that will have full self-driving automation (see 1604080037) -- “is not going to be everywhere instantaneously,” said Reimer, also associate director of the New England University Transportation Center at MIT.


For the “foreseeable future,” and perhaps for decades, “we are looking at a very mixed-use environment” for autonomous vehicles, composed of “manually controlled” vehicles, but also “semi-automated” and fully automated cars, mainly in large metropolitan areas, Reimer said. “Anybody who has a good grasp on what that mix is going to look like is really writing strong science fiction today,” he said. “I don’t think anybody really knows.”

The “data deluge” that will accompany the rise of autonomous vehicles will “vastly increase safety and, concurrently, dramatically increase efficiency,” said Peter Esser, general representative-Washington operations, for NXP Semiconductors, which has big ambitions to play in the autonomous vehicle components space. “What we’ve already done in the skies” for the data that flows from air traffic control “we’re in some sense replicating on the ground” for building the data infrastructure to support self-driving cars, said Esser. “That’s revolutionary.”

Reimer agreed aviation safety is an analogous “case study” for the development of an autonomous vehicle society. He recalled that in the late 1980s, the aviation industry took the “tack” that pilot error “was the source of major system problems.” “It was a system-level thinking that really transformed the aviation community from where it was to where it is. It’s taking that systems approach that’s saying, ‘Look, it’s the relationship between the human, the technology and the policies governing that, that’s the key to success.’ We’re not there yet in the automated ground vehicle fleet. We’re still focused on the technology problem.”

Human behavior may be “hardest to change” on autonomous vehicle adoption, Reimer said. For example, developing “an appropriate level of trust, without an over-trust” of self-driving cars, will be “incredibly difficult,” he said. “It takes a lot of time to develop trust. It takes fractions of seconds to erode that.”

The “good thing” for autonomous vehicle technology is that “we will have the ability to phase it in,” said Esser. “We already see things on the road like adaptive cruise control and advanced driver systems that really provide what you need to carve out those distractions that otherwise lead to fender-benders,” he said. “It’s there. Of course, it’s only in a small percentage of the vehicles and it’s not mandated currently. But ultimately as the fleet refreshes itself -- and however you characterize that horizon, it could be as long as 20 years -- we will see it phased in.” How long it will take for consumers to “achieve that necessary comfort zone” toward autonomous vehicles is one of the “key questions” looming over society, Esser said. What the industry can do to “overcome those lapses in trust” in the wake of a horrific accident is another key question, he said.

Challenges will abound in building autonomous vehicle consumer trust because humans “have set a baseline that we accept human error,” Reimer said. “But we don’t have a baseline to accept machine error. We don’t like drones dropping bombs in foreign lands. We’re not comfortable with that.” Likewise, within the autonomous vehicle infrastructure, “we’re not going to be terribly comfortable with a programmer making a decision somewhere in the world through advanced AI [artificial intelligence] that they don’t understand, making a decision to go right and hit a pedestrian as opposed to go left and hit a telephone pole,” he said. “Those are decisions a programmer’s going to make. Right now, we make those decisions today and we fault human error. They’re difficult ethical dilemmas that we have not yet fully, wholeheartedly thought through how we’re going to solve them.”