Note: This article was originally published in Forbes.

As many of the most innovative companies in the world race to bring autonomous vehicle solutions to market, a fierce debate has emerged in the industry about the best way to build those solutions. The debate centers on the proper role for deep learning in vehicle automation.

An AV must make countless tactical choices moment to moment to navigate through its environment, choices that are second nature to experienced human drivers: how fast to go, whether to stop at a traffic signal, whether to slow to let another vehicle merge, whether to change lanes to avoid a parked car.

These are highly safety-critical decisions. They can mean the difference between life and death on the road, millions of times over, every day. Given the stakes, it is no surprise that the question of which technological approach to apply has taken on huge importance and inspired vigorous debate.

One camp of companies believes that decision making can be handled best with deep learning methodologies.

In such a system, AVs do not make decisions based on rules that humans have explicitly programmed into them (for instance, “stop at red lights”). Rather, the machines are simply fed massive amounts of data depicting a wide range of different driving scenarios; as they consume and identify patterns in that data, they gradually “infer” the proper way to drive on their own.

This approach works exceptionally well most of the time. Many of the companies at the cutting edge of autonomous technology -- Drive.ai, AImotive, FiveAI -- are reported to be pursuing this route.

There is one major drawback, however: using deep learning in this way means that it can be impossible to understand why the machine acted the way it did.

This is because deep-learning neural networks are a “black box”: they consist of millions of connections between nodes that are fine-tuned in opaque, subtle ways as data is fed in. When a deep-learning network produces an output (e.g., the decision to stop or not to stop at a yellow light), that output cannot be traced to a particular sequence in the AV’s software; rather, it is an emergent outcome of the overall system.
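To make the point concrete, here is a toy sketch of a learned stop/go decision. The weights, feature names, and network size are invented for illustration and bear no relation to any production system; the point is that the output emerges from all of the weights acting at once, with no identifiable "if the light is yellow" branch to inspect.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network; real driving networks have millions of weights.
W1 = rng.normal(size=(4, 8))   # input features -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> stop/go score

def decide(features):
    """Return True to stop. The answer is an emergent product of all
    40 weights combined; no single weight encodes a readable rule."""
    hidden = np.tanh(features @ W1)
    score = 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # sigmoid "stop" probability
    return bool(score[0] > 0.5)

# Hypothetical inputs: [speed, distance_to_light, light_is_yellow, traffic_density]
print(decide(np.array([0.7, 0.2, 1.0, 0.3])))
```

If this toy network ever gives a wrong answer, there is no line of code to point at; the only remedy is to retrain on different data, which is exactly the interpretability problem described above.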

Experts call this problem “lack of interpretability.” However well deep-learning networks may perform at driving 99.9% of the time, this lack of interpretability becomes a real concern on those rare occasions when an AV makes the wrong decision and causes an accident. In those situations, humans have no way to explain what went wrong and no way to troubleshoot the error.

Using deep learning in AV decision making, then, entails ceding control and even understanding to the machine. Not everyone thinks this tradeoff is worth it.

Many industry players, in light of these concerns, reject the use of deep learning in AV decision making as dangerous and unwise. Instead, they advocate for systems based on old-fashioned logic and rules.

In a rules-based approach, a set of concrete decision guidelines is explicitly programmed into the AV in code that humans can read and understand, rather than being learned by the machine through experience. After all, proponents of this approach argue, the rules of the road are finite and pre-defined.
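As an illustration of what "readable rules" means in practice, consider this hypothetical traffic-light fragment (not any company's actual code; the 3 m/s² comfortable-braking figure is an assumption for the example). Every branch can be read, audited, and corrected directly:

```python
def should_stop(light_color, distance_to_line_m, speed_mps):
    """Hypothetical traffic-light rule: each branch is explicit and
    inspectable, unlike the weights of a learned network."""
    if light_color == "red":
        return True
    if light_color == "yellow":
        # Stop only if the car can brake comfortably before the line,
        # assuming a comfortable deceleration of 3 m/s^2.
        stopping_distance = speed_mps ** 2 / (2 * 3.0)
        return stopping_distance <= distance_to_line_m
    return False  # green: proceed

print(should_stop("yellow", distance_to_line_m=40.0, speed_mps=12.0))  # -> True
```

If this rule ever produces the wrong decision, an engineer can trace exactly which branch fired and why -- the property the quote below refers to.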

NuTonomy, a leading Boston-based AV startup, has adopted this approach.

“What you want is to be able to go back and say, ‘Did our car do the right thing in that situation, and if it didn’t, why didn’t it make the right decision?’” said nuTonomy COO Doug Parker. “With formal logic, it’s very easy.”

Ford has similarly chosen not to use deep learning in AV decision making.

“Deep learning is well-suited to learning to classify objects,” said Ford executive Jim McBride. “If you want to do pedestrian detection, deep learning is better than conventional algorithms. On the other hand, it’s not suited to everything. The topography of roads and rules of the road are defined by preset conditions so I’m not a proponent of deep learning to handle that.”

A rules-based approach has its own drawbacks, however. What it gains over deep learning methods in interpretability, it sacrifices in flexibility and nuance: it is difficult to provide rules for every possible scenario a car might encounter.

As Gill Pratt, head of Toyota’s autonomous driving program, cautions, “it is challenging to apply formal methods to a heterogeneous environment of human-driven and autonomous cars.”

For now, the proper role for deep learning in AV decision-making remains open for debate. As Jing Wang, Baidu’s top self-driving car executive, admitted when asked about the topic: “Nobody knows the final answer, so everyone has their own opinion…it’s very confusing right now.”



Author: Rob Toews
JD/MBA Candidate, 2018, Harvard University
Co-Founder, SHFFT
