Musk Confirms Massive Compute Required For AV, But Is Developing Own Processor The Right Direction?

By Patrick Moorhead - August 3, 2018

Earlier this week, on Tesla’s earnings call, Tesla's Elon Musk “outed” what had been previously rumored -- that the company is working on its own custom, self-driving car chips. Custom silicon has been all the rage as of late, as we have seen Apple, Google and Facebook jump into the deep end. While only Apple has proven any of this makes sense given the risk and investment, it appears Tesla is also going to test the model. But unlike Apple, Google and Facebook, where messing up a chip doesn’t endanger anyone’s life, for Tesla a flawed chip could endanger lives and put the company at risk.

Elon and chip designer Peter Bannon made some very interesting statements on performance and efficiency, but without a lot of details. One of the most interesting claims was that the new chips have 10X the performance of "other chips." If, like Google, Tesla is comparing itself to an older Nvidia platform, in this case the three-year-old Drive PX2, the claim is less impressive. Teardowns have shown that Tesla built Autopilot V2 using only half the chips on a Drive PX2. Tesla’s claims this week aren’t very impressive when you consider that Nvidia's next-generation solution, Drive Xavier, delivers more than 10X the performance of the prior-generation SoC, and its Drive Pegasus delivers another order of magnitude of performance -- a whopping 320 trillion operations per second. This could mean Tesla is taking a very big risk for very little, if any, gain.

Another provocative statement was on CPU-to-GPU data transfer. Musk said, “the transfer between the GPU and the CPU ends up being one of the constraints on the system.” This is exactly the bottleneck Nvidia addresses with NVLink, which pumps data at 20 GBps between the two processors. Again, as Tesla didn’t release any detailed information, it’s hard to make a direct comparison. Without those details, it appears that Tesla, like Google with its TPU, designed its silicon without knowing what Nvidia was designing.

I have to assume that Tesla's new chip is similar to Google's TPU: an ASIC, meaning it is not hardware-programmable like an FPGA nor flexible like a GPU. There are pros and cons with ASICs. The pro is that if you know exactly what you want to run for the next ten years, they are more efficient; the con is that they cannot adapt when requirements change. Even LIDAR systems use FPGAs, as they have needed to be reprogrammed to keep up with safety standards and, quite frankly, to fix mistakes. At this point, I have to assume that Musk's new solution will have less programmability, meaning it has to be "right" out of the gate.
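To put the competing claims above in rough perspective, here is a minimal back-of-the-envelope sketch. The 320 TOPS figure for Pegasus comes from Nvidia's announcement as noted above; the Drive PX2 (~24 deep-learning TOPS) and Xavier (~30 TOPS) numbers are commonly cited Nvidia marketing figures assumed here purely for scale, and Tesla has published no comparable number for its chip.

```python
# Back-of-the-envelope comparison of claimed deep-learning throughput.
# 320 TOPS (Pegasus) is from Nvidia's announcement; the PX2 and Xavier
# figures are assumed public marketing numbers, used only for rough scale.
platforms_tops = {
    "Drive PX2": 24,        # assumed: ~24 deep-learning TOPS
    "Drive Xavier": 30,     # assumed: ~30 TOPS
    "Drive Pegasus": 320,   # from Nvidia's stated spec
}

# Tesla's claim: 10X the performance of "other chips" -- if the
# baseline is the PX2, that works out to roughly 240 TOPS.
tesla_claim = 10 * platforms_tops["Drive PX2"]

baseline = platforms_tops["Drive PX2"]
for name, tops in platforms_tops.items():
    print(f"{name}: {tops} TOPS ({tops / baseline:.1f}x PX2)")
print(f"Tesla claim (10x PX2 baseline): {tesla_claim} TOPS")
```

Under these assumed figures, a "10X" claim measured against the PX2 would still land below Pegasus, which is the point of the comparison: without a stated baseline, the 10X number is hard to evaluate.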
Like other ASICs, there will likely be a software abstraction layer which, while reducing performance, does enable different kinds of software to run, as long as it is very, very similar to the prior software and frameworks. This could mean that if Tesla needs to update its software, the update could take longer to ship, or Tesla could be limited in how it can fix safety issues down the line. On the other hand, Nvidia's Xavier has been designed from the ground up as a processor for autonomous machines. The SoC integrates six different types of processors for handling different types of data and running computer vision, deep learning, mapping and path planning algorithms.

The silver lining here is that Tesla finally realizes self-driving capability takes a massive amount of compute. Remember when V1 was released? Tesla claimed those systems would work well, but it soon became clear that a smart-camera system did not have the capability to scale from driver assistance to true automated driving. While others are approaching AV technology from the low end and slowly inching up, Nvidia started with supercomputing in the datacenter and made it automotive-grade, functionally safe and energy efficient. Two years ago, Autopilot V2 was released and we were supposed to have already seen a coast-to-coast unassisted drive. That hasn’t materialized yet. I suppose V3 will fix these issues, but I believe it could create new problems.

All of this “not invented here” approach by Musk is to be expected, and sometimes admired, but I hope lives aren’t endangered in this latest science project. Solutions already exist from Nvidia that appear to deliver the required supercomputing performance, with a safety architecture that has already been assessed by experts, and that are backward compatible with Tesla’s software stacks. The good news is that Tesla and the industry have recognized that supercomputing is the key to delivering safe autonomous vehicles.
Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and an understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and in “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.