In a battle that has become somewhat predictable, but fun to watch, NVIDIA and Intel have recently announced new technologies and acquisitions, respectively, to compete in the fast-growing market for Deep Learning in Artificial Intelligence (AI). These announcements demonstrate that both companies are doubling down on their strategies: NVIDIA intends to win with a portfolio of hardware and software based on a common GPU architecture, while Intel intends to compete with CPUs (Xeon and Xeon Phi) beefed up with application-specific integrated circuits (ASICs).
Over the last few months, practically every major technology CEO has declared that Artificial Intelligence will be “The Next Big Thing”. The technologies that will power everything from “precision agriculture” to self-driving vehicles all have one thing in common: they demand an outrageous amount of compute power (literally billions of trillions of operations) to “train” the neural networks behind them. So it is not surprising that NVIDIA and Intel have taken the gloves off in the battle for this lucrative and fast-growing market. To illustrate that growth, NVIDIA recently announced that their Datacenter business grew an eye-popping 110% year-over-year in the latest quarter, to $151M, accounting for roughly 10% of NVIDIA’s total revenues.
Intel brings out some big guns
Clearly, after Intel missed the transition to mobile, or “The Last Big Thing”, they do not intend to miss out on “The Next Big Thing”. Intel entered the fray just last June when they launched the Knights Landing
many-core Xeon Phi, where they tried to pin NVIDIA to the mat with impressive-sounding benchmarks for AI workloads. However, NVIDIA subsequently responded with a litany of corrections, claiming that they, not Intel, win these contests if the benchmarks are properly configured.
Then, in what appeared to some as an abrupt about-face, Intel announced that they are acquiring Nervana Systems
, an AI startup that is developing an ASIC to accelerate the training of neural networks. The chip is still in development, but the company claims the planned Nervana Engine accelerator will outperform a GPU (read: NVIDIA) by 10x. Nervana also brings some impressive software to the table, which it claims delivers a 3x performance boost over the equivalent NVIDIA software. It remains to be seen how Intel will integrate this technology into their business, but the potential for disruption is certainly there.
Intel’s Diane Bryant introduces Slater Victoroff, CEO of Indico. Sporting some well-worn sandals, Mr. Victoroff delighted Ms. Bryant by noting that his Deep Learning applications are not well suited to GPUs, and explained why his company prefers Intel Xeon Phi. (Source: Intel)
The following week, Diane Bryant’s IDF keynote featured two AI pioneers on stage to tout the benefits of using Xeon Phi for AI. First up was startup CEO Slater Victoroff of Indico, whose company specializes in making text and image analysis easier for enterprise applications. The second AI speaker was Jing Wang, senior vice president of engineering at Baidu, a leader in voice processing and natural language translation technology. Wang was enthusiastic about using Xeon Phi for Deep Learning in his shop, a notable endorsement given that Baidu also works very closely with NVIDIA.
Ms. Bryant then announced that the next-generation Xeon Phi, called Knights Mill, would target AI and would support variable-precision math, a key feature that enables NVIDIA’s new Pascal chips to essentially double their AI performance for free.
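To see why variable precision roughly doubles performance: storing values at half precision (FP16) instead of single precision (FP32) halves the bytes per value, so hardware with native FP16 support can move and process twice as many values per cycle. A minimal NumPy sketch of the storage arithmetic (NumPy is used purely for illustration; the actual speedup depends on the chip’s FP16 datapaths):

```python
import numpy as np

# A layer's weights stored at full (FP32) and half (FP16) precision.
weights_fp32 = np.random.rand(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

# Half precision occupies exactly half the bytes, so a fixed-width
# memory bus or register file carries twice as many values at once.
print(weights_fp32.nbytes)  # 4,194,304 bytes (1024*1024*4)
print(weights_fp16.nbytes)  # 2,097,152 bytes (1024*1024*2)

# The trade-off is precision: FP16 keeps only ~3 decimal digits,
# which Deep Learning workloads generally tolerate well.
max_error = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
```

The same trade applies one step further down: dropping to 8-bit integers halves storage again, which is the basis of the inference accelerators discussed below.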
Finally, Intel announced last week that they will acquire Movidius, a Silicon Valley company that has been delivering computer vision accelerator silicon for many years. Movidius is well positioned in vision markets such as drones, and could help Intel further their ambitions in automated driving and the Internet of Things (IoT). So, in the span of just three months, Intel has gone from zero AI products to an impressive portfolio that can accelerate both training and vision inference applications.
But this is still NVIDIA’s house
Not to be outdone, NVIDIA has continued to roll out new products based on their 16nm Pascal architecture, which now includes five data center products for Deep Learning: the Pascal P100 with NVLink and two Tesla P100 PCI-e cards targeting Deep Neural Network training, plus the newly announced Tesla P4 and P40 PCI-e accelerators for cost-effective AI inference, where the trained network is used to make decisions, especially in cloud applications. These newest chips support 8-bit integer math that delivers an astounding 22 and 47 trillion operations per second, respectively, and are optimized with the new software NVIDIA announced for inference (TensorRT) and real-time video analytics (DeepStream). Analyzing streaming video to identify content attributes is an example of a computationally demanding inference job that is well beyond what an Intel Xeon can tackle in real time. This is where the new Teslas shine: the company claims that a single Tesla P4 card running DeepStream can perform as well as 15 dual-socket Intel Xeon E5 servers, which should make it popular with providers of public cloud infrastructure.
NVIDIA’s new Tesla P4 accelerator targets the inference side of Deep Learning, where the volume in cloud computing infrastructure is likely to be large. (Source: NVIDIA)
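The 8-bit integer math behind those inference numbers works by quantizing a trained network’s FP32 weights and activations down to INT8, doing the heavy matrix arithmetic in integers, and rescaling the result. Below is a minimal NumPy sketch of symmetric linear quantization, the general technique; it is an illustration under simplified assumptions, not TensorRT’s actual calibration procedure:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization: map floats to int8 via one scale."""
    scale = np.max(np.abs(x)) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)  # toy layer weights
x = rng.standard_normal(8).astype(np.float32)       # toy activations

qw, sw = quantize_int8(w)
qx, sx = quantize_int8(x)

# Integer matrix multiply (accumulating in int32, as INT8 hardware does),
# then one floating-point rescale to recover real-valued outputs.
y_int = qw.astype(np.int32) @ qx.astype(np.int32)
y_approx = y_int.astype(np.float32) * (sw * sx)

y_exact = w @ x  # full-precision reference
print(np.max(np.abs(y_exact - y_approx)))  # small quantization error
```

Because each multiply-accumulate now operates on 8-bit instead of 32-bit operands, the same silicon area and memory bandwidth can process roughly four times as many operations, at the cost of a small, usually acceptable, accuracy loss at inference time.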
In spite of Intel’s attempts to steal the limelight, NVIDIA is riding high on the momentum they have built in AI, and the future looks bright. Recent research published by Narrative Science suggests that while enterprises have been slow to adopt AI to date, 62% plan to deploy AI applications in their businesses by 2018. If this adoption rate takes hold, and I believe it can, we have barely begun to see how these new technologies will transform businesses and the world around us, and both Intel and NVIDIA are positioning themselves to take advantage.