Nvidia’s Logan Could Be A Mobile Graphics Disruptor

By Aaron Johnson - July 24, 2013

Today at Siggraph, the world’s largest graphics show, Nvidia provided more details on the next-generation graphics capabilities inside Logan, the follow-on to Tegra 4. Logan’s graphics are based on Kepler, the architecture used in Nvidia’s PC, workstation, and cloud solutions, and given Kepler’s strong performance per watt, it has the potential to be a disruptor in the mobile space.

Nvidia is showing off Logan’s graphics capabilities in a couple of videos here and here that highlight its performance and features. The demos, created by Nvidia, are impressive by any measure based on my 20 years evaluating graphics. They take advantage of advanced features like tessellation, global illumination, heavy post-processing, and raw compute performance. From a standards point of view, Logan excels, supporting OpenGL ES 3.0, OpenGL 4.4, and DX11. These levels are very hard to achieve, particularly in mobile. There is no definitive word yet on OpenCL or RenderScript support, but I would find it hard to imagine Nvidia skipping them.

Compared to Apple’s iPad 4, Nvidia showed benchmarks from its Logan development board with nearly 5X higher performance. It’s not the iPad 5, but at that margin it doesn’t matter. On power, Nvidia is measuring graphics power on its development board at nearly 3X lower than the iPad 4 running graphics benchmarks. This bodes well for Logan, but keep in mind the tests were run on a development board with early Logan silicon. The ultimate test is performance per watt and the overall mobile experience in a branded tablet or phone, measured by third-party benchmarkers. Nvidia plans to make this happen in 1H/14.
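To put those two claims together: taking Nvidia’s own figures at face value (these are vendor numbers from early silicon, not independent measurements), roughly 5X the performance at roughly a third of the power works out to about a 15X performance-per-watt advantage, a quick sanity check sketched below.

```python
# Back-of-envelope check using Nvidia's cited figures (assumptions,
# not independent measurements): normalize the iPad 4 to 1.0 on both
# axes, then apply the ~5X performance and ~3X-lower-power claims.
ipad4_perf = 1.0
ipad4_power = 1.0

logan_perf = 5.0 * ipad4_perf     # "nearly 5X higher performance"
logan_power = ipad4_power / 3.0   # "nearly 3X lower" graphics power

# Ratio of the two perf-per-watt figures.
perf_per_watt_gain = (logan_perf / logan_power) / (ipad4_perf / ipad4_power)
print(round(perf_per_watt_gain, 1))  # → 15.0
```

That 15X figure is only as good as its inputs, which is exactly why the third-party benchmarks on shipping hardware matter.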

I’ll admit, architecturally, I was initially skeptical that Nvidia could pull this off. Historically, mobile and workstation graphics architectures were very different. Scaling a mobile GPU up to do what a workstation GPU does has never worked before, and scaling workstation graphics down to mobile never worked either. Nvidia had at least five generations of mobile GPUs to figure this out, and its answer was to make each graphics unit of 192 shader cores self-sufficient, putting every feature, like tessellation, in each unit. That means a chip can use fewer graphics units and still keep the full feature set, at lower power. I’m sure Nvidia has had to do some memory tricks to support all of the graphics data, but I don’t think Nvidia is ready to talk about that yet. Net-net, the architecture is impressive.
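The self-sufficiency idea can be illustrated with a toy model (my own sketch, not Nvidia’s design; the feature names and unit counts are stand-ins): because each unit carries a complete feature set, cutting the unit count for a mobile power budget changes performance, not capability.

```python
# Toy model of "self-sufficient" graphics units: every unit carries
# the full feature set, so feature support is independent of how many
# units a given chip instantiates. Names/counts here are illustrative.
CORES_PER_UNIT = 192
FEATURES = frozenset({"tessellation", "global_illumination", "compute"})

def gpu_config(num_units):
    """Total shader cores and supported features for a chip built
    from num_units self-sufficient graphics units."""
    return {
        "shader_cores": num_units * CORES_PER_UNIT,
        "features": FEATURES,  # present even with a single unit
    }

workstation = gpu_config(num_units=8)  # hypothetical desktop-class part
mobile = gpu_config(num_units=1)       # hypothetical mobile-class part

# Fewer units means fewer cores (and less power), same feature set.
assert mobile["features"] == workstation["features"]
print(mobile["shader_cores"])  # → 192
```

The contrast with earlier mobile GPUs is that features there were often tied to fixed-function blocks that did not survive scaling in either direction.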

So why does this matter to the smartphone and tablet market?  Any time someone delivers a step-function improvement, say 5X, it can’t be ignored.  As a phone or tablet OEM or ODM, you must pay attention or potentially get left behind.  This puts Nvidia in a good position right now, but it’s early and we don’t know exactly what’s on the horizon from Qualcomm, Imagination Technologies, ARM, Intel, or Vivante.

For Nvidia to translate this into broad business success, they need to deliver the benchmarked performance and power on time in a branded phone or tablet, with a wide range of wireless options around the world.  This isn’t easy.  They also need to get mobile ISVs to enable their games and GPU-compute apps to take advantage of the features and architecture, which is something they have done extremely well on the PC side.

Mobile graphics is one of the most competitive technology markets and Nvidia just raised the bar.
