AMD & Baidu
Advanced Micro Devices celebrated another victory in the cloud market yesterday. Chinese artificial intelligence (AI) and search giant Baidu announced the immediate availability of AI, big data, and cloud computing (ABC) services based on single-socket servers powered by EPYC. This follows Microsoft's announcement last week that its new L-Series storage-optimized virtual machines would be powered by EPYC (see Patrick Moorhead's coverage here). Additionally, Hewlett Packard Enterprise recently announced the availability of the ProLiant DL385 Gen10 server platform for virtualized infrastructure, also powered by EPYC (also covered by Patrick Moorhead).
It was not entirely surprising that AMD secured Baidu as a customer; Baidu publicly announced support for EPYC at AMD's launch event back in June. Still, this announcement demonstrates progress in AMD's quest to compete with Intel in the largest hyperscale datacenters.
While specifics around which EPYC system-on-a-chip (SoC) will be utilized have not been announced, Baidu's use of single-socket servers provides a telling hint. The EPYC 7501 SoC is a 32-core part that has a variant specifically for single-socket use. This 180-watt SoC delivers the following horsepower:
- 32 cores with simultaneous multithreading (64 threads)
- Support for up to 2TB of DDR4 memory
- 8 memory channels
- 128 lanes of PCIe 3.0
180W may seem like a large power envelope for a processor, but when considering the workloads being supported, this deployment could be a case study in datacenter efficiency. AI, big data, and cloud computing services all thrive with larger memory footprints and more memory bandwidth. Additionally, workloads such as AI require GPU assistance for optimal performance. The EPYC 7501 provides the largest memory capacity, memory bandwidth, and PCIe connectivity of any x86 processor on the market, by a long shot. Baidu can deploy single-socket servers with the memory, storage, and GPUs to support the demands presented by AI, big data, and cloud services. Servers based on other CPUs would require two sockets to support these workloads, and that second CPU could run upwards of $3,000 per server. Tack on a few hundred dollars more when considering the cost of materials for a second socket in the server and the additional power consumption. Multiply that by tens of thousands of servers, and the cost effectiveness becomes very real.
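That back-of-the-envelope math can be sketched as follows. The dollar figures are the article's rough estimates, not vendor pricing, and the fleet size is a hypothetical stand-in for "tens of thousands of servers":

```python
# Rough cost comparison: single-socket EPYC vs. a two-socket alternative.
# All numbers are illustrative estimates from the article, not actual pricing.

SECOND_CPU_COST = 3000      # roughly $3,000 for the second CPU in each server
PLATFORM_OVERHEAD = 300     # "a few hundred dollars" for extra socket materials and power
SERVERS = 20_000            # hypothetical fleet size (tens of thousands of servers)

per_server_savings = SECOND_CPU_COST + PLATFORM_OVERHEAD
fleet_savings = per_server_savings * SERVERS

print(f"Savings per server: ${per_server_savings:,}")
print(f"Fleet-wide savings: ${fleet_savings:,}")
```

Even with these conservative placeholder numbers, avoiding the second socket across a fleet of this size works out to tens of millions of dollars.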
What this means
Anybody who watched AMD's launch of EPYC walked away understanding the importance of cloud to AMD. This focus is reasonable, as "cloud" will make up about 50% of server deployments over the next few years. The Baidu announcement demonstrates two things to me. First and foremost, AMD is successfully executing what appears to be a "cloud first" strategy. Six months after launch, two of the largest cloud providers have deployed EPYC. Even though these customers undoubtedly tested pre-production silicon, this is an impressive achievement. My hunch tells me there will be more announcements.
Secondly, this announcement shows that cloud providers understand the value of the EPYC SoC. The datacenter architects at Microsoft Azure, Baidu, and elsewhere have the most intimate understanding of CPU performance: integer performance, performance per watt, performance per dollar, and performance per watt per dollar. When the team at Baidu selects EPYC for use in its datacenters, you can rest assured there is a price-performance advantage at play. Enterprise IT organizations, take note: cloud providers are validating EPYC.