Intel’s package is 50% smaller than today’s discrete GPU implementations
Today, Intel announced it is creating a custom chip with discrete AMD Radeon graphics to make gaming laptops thinner and give them better battery life. The same chips could also power very small yet powerful gaming desktops. Most enthusiast gaming laptops built on Intel's Core H-series CPUs perform very well, but they are also thick (approximately 26mm), carry lots of fans, and don't get great battery life while gaming. Through a deep technical integration with AMD's Radeon Technologies Group, Intel wants to change this, driving those laptops down to 16mm and maybe even 11mm. This essentially creates a new enthusiast gaming segment that sits below the highest-performance gaming systems and above integrated graphics systems. So how do you have your cake and eat it, too? It is not easy.
AMD first had to create a custom GPU using second-generation High Bandwidth Memory (HBM2). While we do not know any of the details on the core architecture (Vega or Polaris), the amount of memory, or what was "depopulated," I believe performance would need to land between Ryzen Mobile integrated graphics and the highest-end notebook discrete graphics parts. If not, there could be some positioning challenges. You can do some amazing things through integration, and I cannot wait to see the details.
Next, Intel needed to create a technology and a package that could deliver as much performance as possible within a specific power envelope, in a package tiny enough to fit those 11-16mm designs. Enter Intel's EMIB, or Embedded Multi-Die Interconnect Bridge. EMIB enables extremely fast and efficient throughput between different kinds of chips, like CPUs and GPUs.
EMIB also enables much smaller packages, which is vitally important when you are trying to shave 10mm of thickness and improve battery life. Intel says it reduced the silicon footprint by 50%, as EMIB and HBM2 enable stacking of the memory in a way you cannot do as well with GDDR5. This new Intel part is the first consumer product to take advantage of EMIB technology; Intel already ships Xeons that use the same technique to pair a datacenter-class core with an Altera FPGA, and it will use EMIB in upcoming Nervana chips.
There are still many details we do not know that would be required for a complete assessment. It is not that I think Intel does not know; I think they want to keep NVIDIA guessing. First off, how custom is the AMD Radeon chip, and which architecture is it based on? Were Intel and AMD able to depopulate parts of the custom graphics chip they did not need, to save power and die cost? In the end, what will matter in the market is how much price-performance per watt Intel can get out of a 16mm gaming laptop compared with a thicker one. I highly doubt Intel would have announced anything without knowing with confidence what they are walking into.
We also don't know the financial terms or the length of this AMD tie-up. Is this a trial, "one and done," or could it grow into engagements in, say, HPC, where AMD has not had the resources to create a monster chip with a big CPU and a big GPU? AMD alluded to such an HPC chip years ago at one of its financial analyst days, but it fell off the roadmap. Given AMD's prior semi-custom engagements, at a minimum I am expecting some NRE (non-recurring engineering) revenue for the company.
So, if this is successful, what does it mean for Intel and AMD? Intel will have created a new segment of gaming and prosumer laptops with high-performance graphics. That would demonstrate its commitment to the PC space and give consumers another reason to upgrade, and maybe even pay more for their laptops. It also allows Intel to participate in the revenue from the graphics capability, something the company does not do with today's AMD Radeon or NVIDIA discrete GPUs. For AMD, the deal opens up a high-end gaming laptop market that NVIDIA dominates today, and AMD likely collects NRE dollars as well.
I will admit, I almost fell off my chair when I heard about this AMD-Intel collaboration, for obvious reasons. But then I reminded myself that AMD's Radeon Technologies Group has always had a good relationship with Intel, going all the way back to the ATI days, pre-AMD acquisition. In fact, I often hear murmurs about which group RTG has the better relationship with: the Intel Core group or AMD's own CPU group. I know, weird. This alliance also reinforces who Intel sees as its biggest threat right now, and that is NVIDIA, which is cleaning up in datacenter machine learning. The enemy of my bigger enemy is my friend, right?