A few weeks ago, we covered ARM’s announcement that it would deliver a suite of AI hardware IP for deep learning called Project Trillium. At the time, ARM said that third-party IP could be integrated with the Trillium platform, and now ARM and NVIDIA have teamed up to do just that.
Specifically, the two companies will integrate NVIDIA’s IP for accelerating Convolutional Neural Networks (CNNs), the bread and butter of image processing and visually guided systems such as vehicles and drones. Without much fanfare, NVIDIA open-sourced its Deep Learning Accelerator (NVDLA) last fall, providing free Intellectual Property (IP) licensing to anyone wanting to build a chip that uses CNNs for inference applications (inference, for those unfamiliar, is running a trained neural network on new data). The crying sound you’re now hearing around the world is probably a bunch of well-funded startups and their investors who thought that a dozen guys in a garage could out-engineer NVIDIA when it came to CNN accelerator chips.
Figure 1: NVIDIA CEO Jensen Huang explains the company’s portfolio of AI technologies at GTC.
What did NVIDIA announce?
Jensen Huang foresees that millions of smart chips capable of using NVDLA will be needed for edge processing, especially in the IoT arena. Integrating NVDLA with ARM’s Trillium platform may improve NVDLA’s position in the market for fast CNN chips, powering applications such as smart cameras, smart sensors on the factory floor, and low-cost smart drones. Note that NVIDIA uses NVDLA in its own Xavier SoC in its Pegasus self-driving vehicle platform.
Why would NVIDIA give away such valuable tech? Because Jensen knows that his Tesla family of products, which can deliver 125 trillion operations per second, currently owns the market for training those neural networks. That market has helped propel NVIDIA’s Data Center business to a run rate of roughly $2B of very profitable revenue a year, growing 2-3X annually. If inference chips for CNNs are based on the free NVDLA hardware and NVIDIA’s TensorRT software, NVIDIA gains a ready market for its high-end training chips. Jensen wants to keep NVIDIA engineers focused on solving the largest data problems, and he must believe that processing images is either not that hard or not that profitable going forward.
When one considers that over 20 startups around the world are building chips to accelerate inference and/or training, NVIDIA’s free NVDLA strategy starts to look pretty smart: commoditizing CNN acceleration technology at the edge will make it difficult for NVIDIA’s would-be competitors to capture the high-volume inference market they need to fund their operations. Now these startups will have to compete with NVIDIA where Jensen’s company is at its best: training and more challenging inference workloads.
While I wouldn’t say this move is game-over for all the startups building AI acceleration chips, I would suggest that anyone wanting to build a dedicated CNN processor now has their work cut out for them. They will need to add some special sauce that NVIDIA hasn’t thought of (and good luck with that), or look to build a more general accelerator that can compete with NVIDIA’s GPUs and their ecosystem. Meanwhile, NVIDIA’s NVDLA strategy is looking pretty solid, taking the company from “NV-What?” to the likely leader in the blink of an eye.