The annual Supercomputing conference (SC18) took place in Dallas last week, with over 13,000 attendees and miles of aisles winding across the convention center floor. The big news this year was that the USA retook the top two spots from China on the TOP500 list of the world's fastest supercomputers. Both of these systems are powered by IBM POWER9 CPUs and NVIDIA GPUs, connected by Mellanox InfiniBand. The larger Summit system at Oak Ridge National Laboratory (ORNL) topped the list at 143.5 quadrillion 64-bit floating-point operations per second (143.5 Petaflops). Despite the USA's lead in the top 10, China extended its broader leadership position with 227 (45%) of the 500 fastest computers in the world.

However, for me, the most impressive accomplishment was Summit's performance of 2.3 Exaflops (2,300 Petaflops), achieved by using the 16-bit TensorCores on the GPUs. TensorCores were originally developed by NVIDIA to build and run deep neural networks for AI, but when you give scientists the ability to perform a small matrix multiply-accumulate in a single clock cycle, as TensorCores do, smart people get really creative really fast. A team of scientists at ORNL put the TensorCores to use in scientific work, with a program for comparative genomic analysis. By calculating with the smaller 16-bit numbers, the team achieved a stunning 10,000-fold increase in performance compared to using a CPU, without a loss of precision! This project was a finalist for the prestigious Gordon Bell Prize, along with four others that ran on Summit. I expect more scientists will use this technique to achieve better performance and efficiency on select scientific codes in the future; it gives NVIDIA users a new tool to advance their science and build artificial intelligence.

NVIDIA also introduced new containers in its NVIDIA GPU Cloud software repository, including tools that use the new RAPIDS software to accelerate traditional machine learning workloads.
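To see why 16-bit arithmetic can deliver such speedups without wrecking accuracy, here is a minimal sketch, not the ORNL code, just an illustration of the idea: do the heavy lifting (a large matrix multiply, the operation TensorCores accelerate in hardware) with 16-bit inputs while accumulating in higher precision, which is exactly what TensorCores do internally.

```python
import numpy as np

rng = np.random.default_rng(0)
a64 = rng.random((512, 512))
b64 = rng.random((512, 512))

# Baseline: full 64-bit matrix multiply
c64 = a64 @ b64

# Mixed-precision version: round the inputs to 16-bit floats (as TensorCores
# consume them), then multiply with 32-bit accumulation (as TensorCores do).
a16 = a64.astype(np.float16)
b16 = b64.astype(np.float16)
c16 = a16.astype(np.float32) @ b16.astype(np.float32)

# The relative error stays tiny even though float16 carries only ~3 decimal
# digits, because the long accumulation is done in the wider format.
rel_err = np.abs(c16 - c64).max() / np.abs(c64).max()
print(f"max relative error: {rel_err:.2e}")
```

On a GPU the float16 path also halves memory traffic and maps onto TensorCore hardware, which is where the dramatic speedup over a CPU comes from; numpy here only demonstrates the accuracy side of the trade.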
By applying GPUs to many more workloads, these offerings support the company's HPC vision of combining scale-up with scale-out computing.

Figure 2: Jensen Huang, NVIDIA's founder and CEO, takes the stage before a packed audience at SC18. (Photo: KARL FREUND)

Elsewhere on the show floor, IBM touted its progress in quantum computing with a flashy booth where one could marvel at the device's remarkable cooling technology. In the photo below, you can see the layers of cooling discs; each lowers the temperature of the coolant roughly 10-fold, finally reaching a temperature near absolute zero, colder than outer space.

Lenovo, Dell, and Hewlett Packard Enterprise were also on site, touting their respective HPC systems and wins, with each claiming leadership in AI and HPC. As I have written before, applying AI (deep learning) to HPC is becoming the new normal: it is being used to estimate the outcomes of analyses based on vast repositories of data from traditional simulations, achieving one to three orders of magnitude of speedup. HPE took it one step further, showing off a mockup of its supercomputer in space (though I doubt anyone at the show ordered a second one). More importantly, HPE provided the lone ARM-based supercomputer on the TOP500 list, located at Sandia National Laboratories and based on Cavium's ThunderX2 ARM CPU. Lenovo showed off its cooling technology and pointed out that over 50% of its supercomputing sales were to installations outside of China (since Lenovo acquired IBM's x86 server business, this should not be much of a surprise). Dell, on the other hand, tends to provide smaller HPC systems, emphasizing HPC in the enterprise market, where its Ready Solutions and services stand out.

Conclusions
As always, the annual Supercomputing event was chock full of computer systems and scientists eager to show off their wares and achievements. If you have never been to one, I highly recommend you attend next year's event in Denver, Colorado. Alternatively, if you are in Europe, or just prefer German beers, I recommend attending the international version of the conference in Frankfurt, June 16-20. I have been attending for decades and will surely be there.
Have a great Thanksgiving!