Intel Lays Out Their Comprehensive AI Strategy At Their First “AI Day”

I attended Intel’s event today on Intel AI (artificial intelligence) in San Francisco and wanted to share with you my quick observations:

AI compute background

There are many flavors of AI: neural networks, LSTMs, belief networks, etc. Neural network compute is currently split between two distinct workloads, training and inference.

Generally speaking, training takes much more compute performance and uses more power, while inference (formerly known as scoring) is far lighter. Also generally speaking, leading-edge training compute is dominated by NVIDIA GPUs, legacy training compute by CPUs, and inference compute is divided across Intel CPUs, Xilinx/Altera FPGAs, NVIDIA GPUs, ASICs like Google's TPU, and even DSPs.
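To see why training is so much more compute-hungry than inference, consider a toy one-layer network sketched below. This is a minimal, hypothetical NumPy illustration (not Intel or NVIDIA code): inference is a single forward pass, while each training step repeats that forward pass and then adds gradient computation and a weight update on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))        # weights: 4 inputs -> 3 outputs
x = rng.standard_normal(4)             # one input sample
target = np.array([1.0, 0.0, 0.0])     # desired output for that sample

def inference(W, x):
    """Forward pass only: one matrix-vector product."""
    return W.T @ x

def train_step(W, x, target, lr=0.01):
    """Forward pass, then gradient, then weight update --
    roughly triple the arithmetic of inference alone."""
    y = W.T @ x                        # forward pass (same work as inference)
    err = y - target                   # error under a squared-error loss
    grad = np.outer(x, err)            # backward pass: gradient w.r.t. W
    return W - lr * grad               # parameter update

y = inference(W, x)                    # inference: done after one pass
W2 = train_step(W, x, target)          # training: forward + backward + update
```

And training is not one step but millions of them over huge datasets, which is why it maps so well to massively parallel GPUs, while a single forward pass is light enough to run on CPUs, FPGAs, or DSPs.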

Top-line thoughts

Intel formally threw its hat into the AI ring, which was very important given that the general tech industry sees GPUs as the current driver of AI compute. If Intel can execute on and deliver what they said they would do today, Intel will be a future player in deep neural network AI. To be clear, Intel is part of almost every AI implementation today, as you can't boot a GPU without a CPU, and Intel CPUs handle much of non-DNN AI. But in leading-edge neural network installations, GPUs are doing most of the heavy lifting for deep neural net training.

I believe Intel today said what needed to be said, and if you look at how quickly they have pulled together Altera, Nervana, Phi, Xeon and all the required software work, it's impressive for such a large company. It's now up to Intel to flawlessly execute. It's important to note that companies like NVIDIA, IBM, Xilinx, AMD, and Google with its TPU won't be standing still.

Intel Nervana Platform and Portfolio

Intel acquired Nervana for an estimated $400M in August. Today's announcement of the Nervana Platform marks the company's first dedicated acceleration platform for artificial intelligence and also integrates Nervana technology into Xeon. Intel said customers could deploy Xeons with integrated Nervana silicon technology in late 2017, which is pretty aggressive. The Nervana Portfolio also included FPGAs, but it wasn't clear how Nervana technology was integrated there.

The 100X goal (better training than GPUs) is an incredibly aggressive one, and Intel is publicly putting everyone in the industry on notice that it intends to win in AI training, a space where the newest workloads are today dominated by NVIDIA.

Google Cloud Alliance

This is a good move, as it could accelerate AI in the enterprise, where most of the heat is in public cloud or PaaS solutions. This strikes me as a competitor to what IBM announced last week for the enterprise, called "PowerAI," which leverages OpenPOWER processors and NVIDIA GPUs. It will be very interesting to see how this plays out in the enterprise. Not many details were provided today.

As more details emerge, AI analyst Karl Freund or I will share deeper analysis.