What did Intel announce?
The new strategy is named “OpenVINO,” which stands for Open Visual Inferencing and Neural Network Optimization. One can give Intel a pass for dropping an “N,” but in my world, VINO means WINE, without any ambiguity. I would have loved to be present at the meeting where Intel decided on the branding and would have suggested something like OpenVIA, for “Open Visual Inference Acceleration.” Far more importantly though, I think Intel’s strategy of providing common software development interfaces for multiple silicon implementations makes a great deal of sense. Today I will attempt to explain why I think the company is well-positioned for growth as a result.
Intel has at times been criticized for throwing too many darts at the AI wall, with Intel Xeon Phi, custom ASICs (from Nervana, Movidius, and Mobileye), x86 CPUs, and Altera FPGAs. However, if one looks at the wide range of environments in which smart applications will be deployed, one sees data types, latency tolerances, performance requirements, and power envelopes unique to each. We are still in the early innings of the AI revolution, where researchers are learning how to build useful and valuable applications. This work places enormous computational demands on datacenters to train and refine the required deep neural networks (DNNs).

Over time, though, I expect the inference market (where those trained neural networks are used in mass production environments) will become significantly larger, albeit at lower prices and perhaps lower gross margins. Given that diversity of requirements, I don't believe edge intelligence will be a winner-takes-all situation like the DNN training market, where NVIDIA has built a commanding lead (at least for the short term, while competition is sparse). Edge intelligence, on the other hand, is a market whose tide will lift many boats, and Intel seems to have a boat in every slip for processing DNNs and vision-specific subsystems.
Back on the branding point, the use of "Open" implies that other tech companies are or will be included in the program, and Intel has promised to offer open-source implementations in the future. It's important to note that NVIDIA has already open-sourced hardware and software for image processing with its Deep Learning Accelerator (NVDLA), and is now even partnering with Arm. Intel's promise may be a recognition that some of the very low-level hardware for the convolutional neural networks used in vision processing may commoditize, but I'd be surprised if Intel decides to open-source any hardware specs.
If Intel is going to be successful here, it needs a unified software strategy that lets developers build their apps on common interfaces and then deploy them to whichever processing platform best fits the job. This is exactly what the new OpenVINO toolkit intends to accomplish.
OpenVINO provides the common software tools and optimization libraries needed to deliver on the write-once, deploy-everywhere vision that the company believes will be attractive to developers and system implementation teams across a broad front of industries and applications. By supporting the popular DNN frameworks (TensorFlow, MXNet, and Caffe) as well as OpenCV and direct (hand-coded) solutions, the OpenVINO toolkit may realize this aggressive vision, at least for Intel's CPU, Movidius, and Altera platforms. Intel seems to be off to a good start and has already won some impressive early customers: GE Healthcare for clinical diagnostics, Honeywell for industrial automation solutions, and Agent Vi for visual analytics.
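At its core, the write-once, deploy-everywhere pitch amounts to a common inference interface with interchangeable device backends. The Python sketch below illustrates that design pattern only; the class and function names are my own invention, not OpenVINO's actual API:

```python
# Illustrative sketch of a unified inference interface with swappable
# device backends. NOT OpenVINO's real API; names are hypothetical.
from typing import Protocol


class InferenceBackend(Protocol):
    """Common interface every device target must implement."""
    def infer(self, inputs: list[float]) -> list[float]: ...


class CPUBackend:
    def infer(self, inputs: list[float]) -> list[float]:
        # Stand-in for an optimized CPU kernel path.
        return [x * 2.0 for x in inputs]


class VPUBackend:
    def infer(self, inputs: list[float]) -> list[float]:
        # Stand-in for a Movidius-style vision accelerator path.
        return [x * 2.0 for x in inputs]


BACKENDS = {"CPU": CPUBackend, "VPU": VPUBackend}


def load_network(device: str) -> InferenceBackend:
    """Application code stays identical; only the device name changes."""
    return BACKENDS[device]()


if __name__ == "__main__":
    net = load_network("CPU")  # swap in "VPU" with no other code changes
    print(net.infer([1.0, 2.0, 3.0]))  # → [2.0, 4.0, 6.0]
```

The point of the pattern is that retargeting from, say, a Xeon CPU to a Movidius VPU or an Altera FPGA becomes a one-line device selection rather than a rewrite, which is the economic argument Intel is making to developers.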
Conclusions
This move by Intel will help unify its diverse offerings of CPUs, GPUs, VPUs (Movidius), and FPGAs (Altera) for the rapidly growing market for vision processing at the edge. It will be interesting to see whether OpenVINO incorporates Nervana and discrete GPUs in the future; the acquisitions of Mobileye for autonomous vehicles and Nervana for datacenter AI training should round out Intel's broad portfolio for AI processing. If Intel can now stay the course and execute on this strategy, I believe it will become a formidable competitor in the larger market for inference processing. And it almost goes without saying that the Intel Xeon family remains the industry platform of choice for AI workloads ("classical machine learning") that do not need accelerators.

NVIDIA has become a juggernaut for chips and systems used to train neural networks for AI applications, growing 72% last quarter to reach a $3B forward-looking run rate in its datacenter business unit. However, the growth of inference processing will likely eventually dwarf that market, and Intel has a wealth of technology to bring to bear.

I plan to attend the Intel AI Developers Conference in San Francisco this week and will share my thoughts as the conference unfolds. Hopefully, Intel will provide an update on the status of the Nervana program, which may be essential to its datacenter acceleration strategy. I'm looking forward to learning more!