Next week I will attend the annual international supercomputing event (now renamed the ISC High Performance conference) in Frankfurt, Germany. This conference is the “tock” to the “tick” of the annual US-based Supercomputing (SC) event, held each November. The European show is typically much smaller than its US cousin, but it affords attendees a close-up look at vendors’ plans and the amazing science being conducted at supercomputing centers and institutions around the globe. It is also always a good party, with some 3,000 attendees expected to make the trek to Frankfurt this year.
This will be the first ISC event, to my knowledge, where the keynote address is not about traditional High Performance Computing (HPC) topics such as simulation and modeling. This year, the keynote speaker is Andrew Ng, Chief Scientist at Baidu and associate professor at Stanford University. Andrew is a leading researcher in Artificial Intelligence (AI) and a high-profile advocate for Machine Learning and Deep Neural Networks (DNNs). This is a noteworthy, and welcome, departure from the norm: the traditional HPC community has a lot to gain from adopting the techniques being researched and deployed by the DNN community and internet giants such as Google, Amazon.com, Facebook and Microsoft.
In addition to some awesome brews and brats, here are some topics I hope to learn more about at the show.
- I expect we will see a status update on the upcoming Intel “Knights Landing” many-core Xeon Phi, which is expected to ship later this year. Beyond speeds and feeds, I’d like to see how it will compare to NVIDIA GPUs, especially the new Pascal generation of boards that will begin shipping at about the same time. I am especially keen to learn about any Deep Learning benchmarks the company can share and to hear about Intel’s plans to invest in the Deep Learning ecosystem.
- It is also about time we heard more from Intel on its plans for Altera FPGAs, especially as they relate to HPC and Deep Learning. Will the company target FPGAs at Deep Learning training, and if so, how will it position them relative to Xeon Phi?
- From NVIDIA, I want to hear about the productization of the Pascal P100 chip in Tesla products, and about the company’s plans for the inference side of Deep Learning outside the automotive and embedded space, where it already leads with the Drive PX 2 platform. Specifically, I’d like to hear how the company plans to compete with Google’s Tensor Processing Unit (TPU) for cloud AI services.
- From the OEM vendors Cray, Dell, Hewlett Packard Enterprise (HPE), IBM, Lenovo and Supermicro, I would like to hear their plans for acceleration at scale. How will they enable multiple GPUs and FPGAs in their product lines? Will they incorporate NVIDIA’s NVLink to enable GPU-to-GPU communication for highly scalable workloads? And what do they make of Intel’s decision to compete with them at the very high end of the HPC market? Intel is now acting as the prime contractor for the big Aurora supercomputer at Argonne National Laboratory, slated for installation in 2018 as the largest supercomputer yet announced, based on the Knights Landing and Knights Hill generations of Xeon Phi.
- What does Advanced Micro Devices (AMD) plan to offer the HPC and Deep Learning communities, or is the company focusing its GPUs solely on gaming? The Hawaii chip is long in the tooth, and there has been no news since the Big APU was mentioned at the company’s financial analyst event in May 2015.
- From the big research communities, I will be looking for examples of HPC taking advantage of machine learning to grapple with problems that don’t lend themselves well to procedural, rule-based programming methods. The combination of HPC and Deep Learning holds enormous promise, and scientists are only beginning to explore the possibilities.
That should keep me busy!