NVIDIA GTC: NVIDIA Bets Big On Deep Learning

By Patrick Moorhead - March 23, 2015
NVIDIA knows how to swing for the fences, and they did just that at this year’s GTC (GPU Technology Conference). At GTC 2015, NVIDIA CEO Jen-Hsun Huang announced a multitude of new products, all of them with a very strong focus on Deep Learning. Deep Learning is a term coined by the industry for the application of deep neural networks within the broader field of machine learning. In fact, all three of NVIDIA’s major keynotes were about Deep Learning in one way or another, in addition to the four new product announcements focused on it. This event has traditionally been technical and research-heavy, with a decent amount of graphics technology talks, but this year was overwhelmingly about Deep Learning.

Why is NVIDIA betting on Deep Learning? NVIDIA has a lot of different business segments. Their mobile business is mostly being driven by their automotive business, and the two share similar if not identical SoCs. However, NVIDIA has been challenged to get their mobile SoCs into high-volume smartphones, which is really where the volume is in the mobile SoC business. They can win a few high-end tablets where gaming is really appreciated, including perhaps a design of their own, but that is unlikely to bring profitability to their mobile SoC business. Thankfully, their automotive business appears to be taking off, and they keep getting more significant design wins that continue to give them momentum in automotive. If nothing else, they have the thought-leader crown in automotive.

In addition to their relatively small mobile SoC business, NVIDIA already has more than 75% share of the discrete GPU market and even more share in professional discrete graphics. This has been the case for years, and NVIDIA is using Deep Learning as a way to expand the professional market in which they want to win in the future.
NVIDIA stated at GTC that in the past 7 years there has been 10x growth in GPU computing: more than 3 million CUDA downloads, 319 applications, 800 universities teaching it, 60,000 academic papers and over 450,000 Tesla GPUs. All of that comes out to 54 Petaflops (54,000 Teraflops) of GPU compute power, compared to just 77 Teraflops in 2008. They want to continue to grow the overall size of the professional graphics market and encourage those new buyers of GPUs to buy NVIDIA graphics cards. That leads us into NVIDIA’s four major announcements.
NVIDIA CEO Jen-Hsun Huang talks about GPU Compute Growth (Credit: Anshel Sag)
What bets is NVIDIA making in Deep Learning? NVIDIA’s bet on Deep Learning is a fairly big one, but it likely won’t manifest itself in the short term, as much of Deep Learning still requires graduate and post-graduate research at universities that can eventually trickle into the private sector. We are already starting to see the beginnings of that with Google and Baidu, but for it to become a large industry, there will need to be far more people working on Deep Learning problems than there are now. NVIDIA is looking to kick-start that industry, and to be a major player in it if not win it outright, through the promotion of their own GPUs and the coding languages for them. NVIDIA’s first announcement was the “Titan X”, the company’s fastest GPU to date, with 7 Teraflops of theoretical single precision compute capability and a whopping 12GB of memory.
NVIDIA CEO Jen-Hsun Huang talks about Titan X (Credit: Anshel Sag)
Some people would argue that this card doesn’t particularly live up to the Titan name, since it has only 200 Gigaflops of double precision compute capability compared to 1,707 Gigaflops in the previous generation Titan Black, while still selling at the same $999 price. NVIDIA says this is because the Maxwell GPU used on this card was designed for gaming and single precision computing, and this works well for Deep Learning since it uses single precision. For customers who require full double precision compute capability, NVIDIA offers the Tesla line, their product line dedicated to High Performance Computing. Currently, those Teslas run on the older Kepler architecture, which leaves those looking for double precision with older architectures until the next Supercomputing conference in November.

NVIDIA also announced a development machine they built for Deep Learning called the “Digits Devbox”, which uses four of NVIDIA’s Titan X GPUs to deliver a whopping 28 Teraflops of compute capability. The purpose of this box, NVIDIA says, is to pre-configure the machine and all of its software so that researchers working in Deep Learning can immediately get to work rather than spend time building and configuring their machines. NVIDIA says they made this decision based on feedback from their existing community of researchers, who wanted an easier process. The Devbox sells for a cool $15,000, while each of the four GPUs inside it costs about $1,000 on its own.
NVIDIA CEO Jen-Hsun Huang talks about Digits Devbox (Credit: Anshel Sag)
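NVIDIA’s point that Deep Learning runs in single precision is worth unpacking: storing weights and activations as 32-bit floats halves memory versus 64-bit and maps onto the fast fp32 units of a card like Titan X. Here is a minimal NumPy sketch of a dense-layer forward pass done entirely in fp32 (the layer sizes are my own arbitrary choices, purely for illustration):

```python
import numpy as np

# Hypothetical dense layer: 4096 inputs -> 1024 outputs, batch of 64
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 4096)).astype(np.float32)    # activations in fp32
w = rng.standard_normal((4096, 1024)).astype(np.float32)  # weights in fp32
b = np.zeros(1024, dtype=np.float32)

y = np.maximum(x @ w + b, 0)  # ReLU(xW + b), computed entirely in fp32

# fp32 needs half the memory of fp64 for the same tensors
print(w.nbytes // (1024 * 1024), "MiB of weights in fp32")  # 16 MiB vs 32 MiB in fp64
```

The same trade holds on the GPU: half the memory traffic per value, and consumer Maxwell parts execute fp32 far faster than fp64, which is exactly why a gaming-oriented chip suits this workload.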
NVIDIA’s CEO Jen-Hsun Huang also gave more details on the company’s GPU roadmap, specifically the Pascal architecture expected in 2016. It has three new features designed to boost performance and deliver an estimated increase of up to 10x in Deep Learning performance over Maxwell: mixed precision compute capability, 3D memory, and the new high-speed interconnect technology called NVLink which they introduced last year.
NVIDIA CEO Jen-Hsun Huang talks about Pascal (Credit: Anshel Sag)
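Pascal’s mixed precision deserves a quick illustration. The usual idea behind the technique (my sketch of the general approach, not NVIDIA’s specific implementation) is to do bulk arithmetic in 16-bit floats for speed while keeping a 32-bit master copy of each weight, because tiny gradient updates can round away entirely in fp16:

```python
import numpy as np

lr = np.float32(1e-4)    # learning rate (hypothetical value)
grad = np.float16(0.25)  # gradient as computed in fp16
update = lr * grad       # 2.5e-5, promoted to fp32

# Applying the update directly to an fp16 weight is lost to rounding:
# the gap between 1.0 and the next fp16 value is ~0.001, far larger than 2.5e-5
w16 = np.float16(1.0)
assert np.float16(w16 + update) == w16  # update vanished

# Accumulating into an fp32 master copy preserves the update
master_w = np.float32(1.0)
master_w = master_w + update
assert master_w > np.float32(1.0)  # update survived
```

This is why “mixed” precision rather than pure fp16 is the promise: the cheap format carries the heavy math, the wider format protects the accumulated state.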
Last but not least was the official launch of their “Drive PX” platform for autonomous driving. Like Pascal, Drive PX had already been announced at CES (Consumer Electronics Show), but NVIDIA gave more detail at GTC: pricing of $10,000 and availability in May. The Drive PX can feature up to two Tegra X1 mobile SoCs, but NVIDIA clarified that this is just a development platform and that production automotive platforms will vary broadly in size and cost per unit. They didn’t elaborate much on who exactly is using Drive PX, but they did again discuss the Deep Learning aspects of the autonomous driving system they are developing with it.
NVIDIA CEO Jen-Hsun Huang talks about Drive PX (Credit: Anshel Sag)

Will it pay off? NVIDIA is making major bets on Deep Learning as their future driver of GPU computing growth. There is little chance we will see much impact from Deep Learning on NVIDIA’s growth in the near term, and that’s OK. Think about it like this: if Deep Learning is one of the “next big things”, and it most likely will be, NVIDIA is positioning itself as a leader in it. NVIDIA’s short-term performance will still be driven by their current businesses, which include gaming graphics, professional graphics and GPU compute, where they continue to excel. NVIDIA already showed us at GDC that they are looking to GRID to provide a new source of income from cloud gaming and consoles. That will most likely drive mid-term growth, providing a bridge from today to a Deep Learning future. NVIDIA is still quite committed to gaming, and they are simply diversifying their reach in current markets with new products and services.

Patrick founded the firm based on his real-world technology experiences and an understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and in “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq, now HP, and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.