High-end GPUs are all the rage right now. They are driving deep neural net training for artificial intelligence, enabling the growing PC gaming market, and powering next-generation virtual reality, augmented reality and mixed reality applications. There are only two high-end players in the GPU space: Advanced Micro Devices' Radeon group and NVIDIA. Over the past few years, AMD has dominated the gaming console market and the lower-end graphics market and shared the mid-range, while NVIDIA currently owns the datacenter, professional graphics, GPU DNN training and the highest-end gaming graphics. AMD has gained unit graphics share over the past few quarters.
While there are many factors that go into which vendor does what, architecture has historically been a determining factor in big swings of market share in one direction or another. This is what makes AMD’s Vega architecture so interesting as it could determine AMD's place in graphics for the next 5 years. AMD has been riding different variations of the same GCN (Graphics Core Next) architecture since 2011 and has been making improvements to it, but Vega brings an entirely new architecture to the table.
Why a new architecture?
Advanced Micro Devices's Radeon group has designed the Vega architecture to attack future workloads spanning workstation, compute and gaming. AMD contends that current architectures can't tackle these future workloads: game install sizes are rising, professional graphics data density and compute workloads are growing into the petabytes, and there is a widening gap between compute power and memory capacity. All of this is true. Are you seeing a logical trend here around memory? If not, you should.
Architecture is important, but in the end, what matters is the performance per watt and density that Vega-based products deliver across workloads, and we won't know those final numbers for a while.
Memory scalability, scalability, and more scalability
Advanced Micro Devices is calling Vega 'the world's most advanced GPU memory architecture'. It's interesting that they lead with memory, not compute, right? I don't think this indicates they are planning on weak compute, but memory scalability with a new hierarchy is something that could clearly differentiate them. I will get to compute later.
Vega includes a high-bandwidth cache and cache controller with 512 TB of virtual address space that connects different-speed memory systems across NVRAM, system DRAM and networked storage. HBM2 delivers 2x the bandwidth per pin and 8x the capacity per stack compared to HBM1, which adds to Vega's scalability.
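To put those multipliers in perspective, here is a rough back-of-the-envelope sketch. The 2x-per-pin and 8x-per-stack figures are the ones cited above; the first-generation HBM baseline (128 GB/s and 1 GB per stack) and the two-stack card configuration are my own illustrative assumptions, not AMD statements.

```python
# Illustrative arithmetic for the HBM1 -> HBM2 scaling described above.
# Baseline HBM1 figures and the stack count are assumptions for illustration.

HBM1_BANDWIDTH_GBS = 128   # assumed per-stack bandwidth, first-gen HBM
HBM1_CAPACITY_GB = 1       # assumed per-stack capacity, first-gen HBM

HBM2_BANDWIDTH_GBS = HBM1_BANDWIDTH_GBS * 2   # 2x bandwidth per pin
HBM2_CAPACITY_GB = HBM1_CAPACITY_GB * 8       # 8x capacity per stack

def card_totals(num_stacks):
    """Aggregate bandwidth and capacity for a hypothetical HBM2 card."""
    return (num_stacks * HBM2_BANDWIDTH_GBS, num_stacks * HBM2_CAPACITY_GB)

# A hypothetical two-stack configuration:
print(card_totals(2))  # (512, 16) -> 512 GB/s and 16 GB
```

The point of the exercise: even a modest stack count gets you into bandwidth and capacity territory that a GDDR-based card needs a much wider, hungrier bus to reach.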
New geometry pipeline
In Vega, Advanced Micro Devices is implementing a new geometry pipeline which AMD says delivers more than 2x the peak throughput per clock versus previous architectures. Part of what enables this is AMD's implementation of a primitive shader that runs in parallel with the vertex and geometry shaders.
They have also improved load balancing allowing for more distributed tasks to be processed through the geometry, compute and pixel engines. Each of these engines is responsible for a different portion of what needs to be processed by the graphics chip to display a rendered image.
New compute unit
In addition to a new geometry pipeline, AMD has implemented a new compute unit, which they're simply dubbing the NCU, for Next-generation Compute Unit, replacing the compute units of GCN (Graphics Core Next). Compute units are responsible for the fundamental mathematical calculations behind all the different types of functions the GPU performs.
One of the more interesting characteristics of the NCU is that its precision rate is configurable. AMD says it can do 128 32-bit operations per clock, 256 16-bit operations per clock and 512 8-bit operations per clock. What I'm most interested in better understanding is what happens to performance between 32, 16 and 8-bit operations. I'll hunt that down later, as it's really important from an efficiency standpoint. For example, and this is true in every piece of silicon, if you throw 32 bits of die area at crunching 8 bits of data, that's not as efficient as 8 bits of die area crunching 8 bits of data.
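The per-clock numbers above imply a simple doubling pattern: halve the operand width, double the throughput. A quick sketch of the theoretical peak math, where the CU count and clock speed are hypothetical placeholders of mine rather than any announced Vega part:

```python
# Packed-math scaling implied by the per-clock figures quoted above.
# The per-NCU operation counts come from the article; the CU count and
# clock speed below are hypothetical, chosen only for illustration.

NCU_OPS_PER_CLOCK = {32: 128, 16: 256, 8: 512}

def peak_ops_per_second(num_cus, clock_hz, bit_width):
    """Theoretical peak, assuming every CU issues packed ops every clock."""
    return num_cus * clock_hz * NCU_OPS_PER_CLOCK[bit_width]

# A hypothetical 64-CU GPU at 1.5 GHz:
for bits in (32, 16, 8):
    tera_ops = peak_ops_per_second(64, 1.5e9, bits) / 1e12
    print(f"{bits}-bit: {tera_ops:.1f} Tops/s")
```

Note this is a theoretical ceiling only; whether real workloads see anything close to a clean 2x from dropping to 16-bit is exactly the question I want answered.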
New pixel engine
Advanced Micro Devices also implemented a new pixel engine in Vega, continuing the work AMD has been doing for years to reduce memory footprint. The new pixel engine utilizes a draw stream binning rasterizer, which is once again designed to improve performance and save power. It does this by shading each pixel only once and culling pixels that are not visible to the user.
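Conceptually, binning rasterization works like this: primitives are sorted into screen-space tiles, and within each tile only the front-most fragment per pixel survives to be shaded. The sketch below is my own loose illustration of that general idea, not AMD's implementation; the tile size, data layout and function names are all invented for clarity.

```python
# A loose conceptual sketch of tile binning with shade-once visibility.
# Not AMD's design; all structures here are hypothetical illustrations.

TILE = 32  # hypothetical tile size in pixels

def bin_primitives(prims):
    """Assign each primitive (by its bounding box) to the tiles it may cover."""
    bins = {}
    for p in prims:
        x0, y0, x1, y1 = p["bbox"]
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                bins.setdefault((tx, ty), []).append(p)
    return bins

def shade_tile(prims_in_tile):
    """Resolve visibility per pixel first, then shade each pixel only once."""
    nearest = {}  # pixel -> (depth, primitive)
    for p in prims_in_tile:
        for pixel, depth in p["fragments"]:
            if pixel not in nearest or depth < nearest[pixel][0]:
                nearest[pixel] = (depth, p)
    # Only the surviving fragments are shaded: hidden pixels cost nothing.
    return {pixel: prim["color"] for pixel, (_, prim) in nearest.items()}
```

The power and bandwidth win comes from the last step: fragments that lose the depth test are culled before any shading work or memory traffic is spent on them.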
They’ve also connected the pixel engine, along with the compute and geometry engines, directly into the L2 cache, which they didn't do before. This is especially helpful when using deferred shading, where performance gains can be realized by connecting to faster cache instead of going through a memory controller.
All this techno-babble is important and fun, but what does it ultimately mean for Advanced Micro Devices and the Radeon graphics business and its competitiveness? My big takeaway is that AMD rode GCN for five years, is deriving a maximum of $249 in the consumer space for a card based on GCN, and is moving to a new architecture with Vega, which theoretically gives them access to markets yielding consumer maximums of around $699, and even higher in workstations and the datacenter.
With Vega, AMD has architecturally changed nearly everything: the memory architecture, geometry pipeline, compute unit and pixel engine. As I said before, architecture improvements are great, but what matters in the end is how the individual Vega-based products perform per watt and per unit of density across different workloads, what prices they can command and how much they cost AMD to make.
Vega signifies AMD's best opportunity in graphics to drive share and profits, and I'm looking forward to testing the first Vega products.