Computing power required for fully autonomous vehicles is mind-boggling
Like it or not, we are on the road to an autonomous vehicle future. According to some predictions, autonomous vehicles (AVs) — with no need or provision for drivers — will be the norm within 20 years.
But the further we venture down that road, the more daunting the task becomes. So much so that the goal will, quite literally, need superhuman assistance to achieve. “Superhuman” as in artificial intelligence, that is.
Artificial intelligence (AI) is loosely defined as the ability of a computer-controlled device to perform intellectual tasks characteristic of humans: to learn and reason, to perceive and understand its environment and, using those abilities, to take actions that maximize its chance of achieving its goals.
In other words, we’re heading for cars that can think for themselves.
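One way to picture that definition is as a continuous sense-decide-act loop: perceive the surroundings, choose the action judged most likely to keep the drive safe, then carry it out. The short Python sketch below is purely illustrative; the names, values and thresholds are hypothetical and not drawn from any production driving system.

```python
# Illustrative sense-decide-act loop; all names and values here are
# hypothetical, not taken from any real autonomous-driving stack.

from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_ahead: bool
    distance_m: float

def sense() -> Observation:
    # In a real vehicle this would fuse camera, radar and lidar data.
    return Observation(obstacle_ahead=True, distance_m=12.0)

def decide(obs: Observation) -> str:
    # Choose the action judged most likely to keep the drive safe.
    if obs.obstacle_ahead and obs.distance_m < 15.0:
        return "brake"
    return "maintain_speed"

def act(action: str) -> None:
    print(f"Executing: {action}")

if __name__ == "__main__":
    # One pass of the loop; a vehicle would repeat this many times per second.
    act(decide(sense()))
```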
Systems already developed and in production, such as Cadillac’s Super Cruise and Mercedes-Benz’s Drive Pilot, offer significant autonomous capabilities within defined conditions.
But the challenges of eliminating those restrictive conditions — to enable the vehicles to go anywhere, any time, on any road, in any weather — seem to get bigger the closer we get to achieving them.
According to NVIDIA, a corporate leader in AI technology for AVs, the computing demands of a totally driverless vehicle are from 50 to 100 times greater than those for the most advanced cars available today.
Jen-Hsun Huang, co-founder, president and CEO of NVIDIA, was a keynote speaker at CES 2018. He called the goal of autonomous driving the greatest computing challenge of its kind. “The number of challenges necessary to solve… is utterly daunting,” he said. “Autonomous vehicles represent a level of complexity the world has never known…and because lives are at stake, its decisions must always be the right ones, made using software no-one has ever known how to write.”
To put that complexity in perspective, consider all the conscious and subconscious observations and decisions you make, as a human driver, on even the shortest routine drive in familiar territory and favourable weather.
Now consider the task of programming a machine to make the same drive, even with input from a full suite of sensors, including cameras, Lidar, three-dimensional mapping and direct communication from other vehicles — assuming they all work to perfection.
Then consider that task encompassing all the possible conditions, on every road, everywhere! To call it mind-boggling would be a gross understatement.
Simply put, with traditional computer algorithms that work on a “decision-tree” basis, the task is next to impossible, even using something like NVIDIA’s Xavier processor, which is capable of 30 trillion operations per second. The number of conditions and permutations that would have to be explicitly coded is effectively unbounded.
“There’s no way that computer vision using fixed algorithms (performing fixed tasks) can handle the diversity of things that happen on the road,” says Danny Shapiro, NVIDIA’s Senior Director of Automotive.
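To see why fixed rules run out of road so quickly, consider a toy rule-based controller like the sketch below. The scenario names, thresholds and counts are invented for illustration; the point is that every unanticipated combination of weather, road and hazard needs yet another hand-written branch, and the combinations multiply relentlessly.

```python
# Toy "decision-tree"-style controller built from hand-written rules.
# All conditions, values and counts are invented for illustration.

def rule_based_action(weather: str, road: str, hazard: str) -> str:
    if weather == "clear" and road == "highway" and hazard == "none":
        return "cruise"
    if weather == "clear" and road == "highway" and hazard == "stopped_car":
        return "change_lane"
    if weather == "rain" and road == "highway" and hazard == "stopped_car":
        return "brake_gently"
    # ...and so on: every new combination of weather, road surface,
    # traffic behaviour and local driving culture needs its own rule.
    return "unknown_situation"  # the brittle fallback

# Even a handful of coarse variables multiplies quickly:
weathers, roads, hazards = 10, 20, 50
print(weathers * roads * hazards, "combinations from just three variables")
```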
But we as individual drivers do it every day. We do it by first learning to drive, then practising and gaining experience until the task becomes less daunting. Based on that experience, we learn to anticipate rather than just react, and we extrapolate from past occurrences to predict those about to happen and take proactive action.
We learn to adapt to changing weather conditions, varied road surfaces, erratic traffic patterns, detours and road closures, and even to different driving cultures.
That’s the stage AI development for AVs has now reached. And it has the huge advantage of being able to pool all the information and experience gathered from all its resources, not just what we experience as individuals. In addition, it can be tested and tutored, via simulation, on literally millions of real and potential scenarios.
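In very rough terms, that is the difference between hand-coding rules and learning a response from pooled examples, real and simulated. The sketch below, which fits a simple nearest-neighbour model from scikit-learn to made-up toy data, is only meant to illustrate the shape of the idea; it is not how any production AV system is actually trained.

```python
# Toy illustration of learning a driving response from pooled examples
# instead of hand-coding rules. All data here is invented.

from sklearn.neighbors import KNeighborsClassifier

# Each row: [obstacle_distance_m, own_speed_kmh, road_is_wet],
# imagined as pooled from many drives and simulated scenarios.
X = [
    [5.0,  60.0, 0],
    [50.0, 60.0, 0],
    [8.0,  40.0, 1],
    [80.0, 90.0, 0],
    [6.0,  80.0, 1],
    [40.0, 50.0, 1],
]
# Labels: what a competent driver did in each situation.
y = ["brake", "maintain", "brake", "maintain", "brake", "maintain"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)

# The model handles a situation it never saw verbatim by drawing on
# the pooled experience of similar ones.
print(model.predict([[7.0, 70.0, 1]]))  # -> ['brake']
```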
Inevitably, the result will be cars that not only think for themselves but do so at a level far superior to anything an individual human mind can achieve. Which in itself raises a concern: is there a point at which we lose control of these “superior minds”?
Fanciful as it sounds, it’s a question that has been raised about AI in general by such visionary thinkers as the late Stephen Hawking. Proponents of the technology say that if AI is trained only to drive, that’s all it can do. It cannot modify or reprogram itself, and the idea of a Terminator-like machine rebellion is pure science fiction.
Still, we’ll leave the last word to Hawking: “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble!”