This week, I tuned into NVIDIA's annual GPU Technology Conference, or GTC, albeit virtually due to the pandemic. The processor powerhouse has a long history of category-defining innovation, dating back to its 1999 launch of the GeForce 256, which it billed as the world's first GPU. This week's keynote was chock-full of news and innovation, including another all-new processor category designed to muscle NVIDIA deeper into the data center market. Let's take a look at the new processor and several other announcements from GTC 2020.
Look out, data center market
One of the most significant announcements was the unveiling of the DPU (data processing unit)—a whole new category of processors designed to offload networking, security, and storage tasks from CPUs in the data center. Essentially, you can think of these DPUs as smart NICs. Under the new family moniker BlueField, NVIDIA unveiled the first two of these accelerators: the BlueField-2 and the BlueField-2X.
BlueField-2 brings together an array of fully programmable Arm cores, a ConnectX-6 Dx network adapter (derived from NVIDIA's acquisition of Mellanox) and an NVMe-optimized controller to deliver up to 200 Gb/s of throughput for both traditional and modern workloads. The DPU includes hardware offloads for functions such as software-defined storage, zero-trust security, networking and management. By offloading these functions from the CPU, NVIDIA says a single BlueField-2 can deliver the same data center services that could otherwise consume as many as 125 CPU cores. Just think about what those freed-up CPU cores could accomplish.
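A rough back-of-the-envelope calculation suggests the 125-core figure is at least plausible. The constants below (packet size, per-packet cycle cost, core clock) are my own illustrative assumptions, not NVIDIA's published numbers:

```python
# Sanity check on NVIDIA's claim that one BlueField-2 replaces up to
# 125 CPU cores. Every constant here is an illustrative assumption.

LINK_BPS = 200e9            # BlueField-2 line rate: 200 Gb/s
PACKET_BYTES = 1500         # MTU-sized packets (assumed)
CYCLES_PER_PACKET = 20_000  # crypto + storage + SDN work per packet (assumed)
CORE_HZ = 3e9               # a 3 GHz server core (assumed)

packets_per_sec = LINK_BPS / (PACKET_BYTES * 8)
packets_per_core = CORE_HZ / CYCLES_PER_PACKET
cores_needed = packets_per_sec / packets_per_core

print(f"{packets_per_sec / 1e6:.1f} Mpps at line rate")
print(f"~{cores_needed:.0f} software cores to keep up")
```

Under these assumptions, saturating a 200 Gb/s link in software would take on the order of a hundred cores, which is in the same ballpark as NVIDIA's claim.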
This claim isn't "crazy-talk." Look at what the team at AWS is doing with "Nitro," where offload and virtualization are very much a reality at the cloud giant. NVIDIA can offer this capability to every cloud provider and enterprise OEM that does not have an in-house silicon team.
The second DPU, BlueField-2X, includes the same key attributes as BlueField-2 but bolsters them with AI via an NVIDIA Ampere GPU. Leveraging Ampere's third-generation Tensor Cores, BlueField-2X can perform the real-time, AI-driven security analytics functions detailed in the graphic above.
Complementing the BlueField DPUs is the DOCA SDK, short for "data-center-infrastructure-on-a-chip architecture." Arguably the most impactful announcement, this open platform enables developers to build cloud-native, software-defined infrastructure services accelerated by the new DPUs. You can think of it as analogous to the programming tack NVIDIA took with CUDA and GPU-accelerated applications. Additionally, DOCA integrates NVIDIA NGC, a containerized software environment that NVIDIA says will allow third-party developers to use DPU-accelerated services to build, certify and deploy their apps to customers.
Straight out of the gate, NVIDIA announced numerous global server manufacturers that plan to integrate NVIDIA DPUs into their offerings as soon as 2021. These include ASUS, Atos, Dell, Fujitsu, Gigabyte, H3C, Inspur, Lenovo, Quanta/QCT and Supermicro. BlueField-2 is currently sampling, while BlueField-2X is projected for availability next year.
AI comes to graphics and collaboration
In the realm of graphics, NVIDIA's long-time bread and butter, the company announced NVIDIA Maxine, a cloud-AI video streaming platform. Maxine is essentially a toolkit designed to give service providers a host of audio, video and conversational AI features to enhance video communication. These features include background noise removal, facial alignment, real-time translation, context-aware closed captions and more. But perhaps the most significant attribute is its use of GANs (generative adversarial networks) to enable high-resolution video conferencing, even over low-bandwidth connections.
Anyone who's tried to make a video call over a less-than-optimal connection knows how frustrating it is to have the call break up and freeze mid-conversation. It's an all-too-familiar story in this era of increased remote work. Maxine replaces video codecs (the traditional method of video compression and decompression) with GANs: rather than streaming full frames, the sender transmits a compact representation of the speaker's face, which the receiving side uses to re-synthesize the video. NVIDIA says this approach allows calls to run at one-tenth of the network bandwidth traditionally necessary for smooth transmission. Video calls may never be the same; I'll be the first in line to try it.
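To get a feel for why a keypoint-driven approach saves so much bandwidth, here is a toy comparison. The landmark count, bytes per landmark and codec bitrate are all my own assumptions for illustration; NVIDIA has not published Maxine's wire format:

```python
# Toy bandwidth comparison: keypoint streaming vs a conventional codec.
# All figures are illustrative assumptions, not NVIDIA's published numbers.

FPS = 30
LANDMARKS = 68           # facial keypoints per frame (assumed)
BYTES_PER_LANDMARK = 4   # two 16-bit coordinates (assumed)

# Keypoint stream: only landmark positions cross the wire; a GAN on the
# receiving side re-synthesizes the face from a reference image.
keypoint_bps = LANDMARKS * BYTES_PER_LANDMARK * 8 * FPS

# Typical 720p video-call bitrate for a conventional codec (assumed).
codec_bps = 1_500_000

print(f"keypoint stream: {keypoint_bps / 1000:.1f} kbit/s")
print(f"codec stream:    {codec_bps / 1000:.0f} kbit/s")
print(f"ratio: ~{codec_bps / keypoint_bps:.0f}x less bandwidth")
```

Even this crude sketch shows well over an order-of-magnitude reduction, which makes NVIDIA's one-tenth-bandwidth claim look conservative rather than fanciful.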
In other graphics-related news, NVIDIA announced the open beta deployment of its Omniverse platform, which it calls “the world’s first NVIDIA RTX-based 3D simulation and collaboration platform.” To paint a picture, Omniverse is a bit like the Star Trek holodeck—a photorealistic, real-time simulation that melds the physical and virtual worlds. Though such technology has wide-reaching implications and potential, NVIDIA envisions it as a tool to bring remote teams together to work collaboratively on projects in areas such as design, engineering and animation.
Omniverse is made possible by Pixar’s Universal Scene Description format, the gold standard for universal interchange between 3D applications. Meanwhile, NVIDIA provides the technological muscle for “real-time photorealistic rendering, physics, materials and interactive workflows between industry-leading 3D software products.” I believe Omniverse has potential, and I’m excited to hear more about it as it moves through beta.
Targeting tomorrow’s AI developers
NVIDIA also unveiled an entry-level developer kit for its Jetson AI at the Edge open platform. The Jetson Nano 2GB Developer Kit seeks to encourage the next generation of AI and robotics enthusiasts through various hands-on projects, online training courses and AI certification programs. Additionally, users will have access to the Jetson developer community's wealth of resources, including open-source projects and instructional materials.
Though it targets students, hobbyists, and educators, Nano isn't just kid stuff—it contains the same NVIDIA CUDA-X accelerated computing stack that the company leverages in advanced applications such as self-driving cars, healthcare, IoT, smart cities and more. At the incredible price of $59, Jetson Nano looks to be a great gateway into the Jetson AI at the Edge ecosystem. For those who advance beyond tinkering with entry-level AI devices, there's plenty of room to move up the Jetson platform, all the way to fully autonomous machines. NVIDIA is democratizing AI at the edge.
Accelerating healthcare research
The last area of news I’ll touch on is healthcare—of particular importance in this current global moment. NVIDIA lifted the curtain on a new supercomputer it calls “Cambridge-1,” which will be the most powerful system in the UK when it goes online (expected at the end of this year). Pharmaceutical companies and medical researchers will be able to leverage Cambridge-1’s 400 petaflops of AI performance and eight petaflops of Linpack performance for urgent medical research (yes, that includes Covid-19). Researchers from GSK, AstraZeneca, Guy’s and St. Thomas’ NHS Foundation Trust, King’s College London and Oxford Nanopore Technologies have announced they will leverage the system (and I would expect more to come). Upon completion, the NVIDIA DGX SuperPOD system will rank 29th on the TOP500 list of the most powerful supercomputers, and in the top three most energy-efficient systems on the Green500.
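The 400-petaflops figure squares with the announced hardware. Assuming the system is built from NVIDIA's stated DGX A100 building blocks (an 80-node SuperPOD, with each DGX A100 rated by NVIDIA at 5 petaflops of AI performance), the arithmetic works out:

```python
# Sanity check on the Cambridge-1 AI-performance figure. Node count and
# per-node rating reflect NVIDIA's announced DGX SuperPOD configuration;
# treat the breakdown as illustrative.

DGX_A100_NODES = 80       # announced size of the SuperPOD
AI_PFLOPS_PER_NODE = 5    # NVIDIA's AI rating for one DGX A100

total_ai_pflops = DGX_A100_NODES * AI_PFLOPS_PER_NODE
print(f"{total_ai_pflops} petaflops of AI performance")
```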
NVIDIA also announced a partnership with GSK, a global healthcare company, and its London-based lab for AI-based medicine and vaccine discovery. By marrying GSK’s wealth of genetic and genomic data with advanced computing platforms and AI techniques, the lab hopes to unlock biomedical data's potential, at scale, for “transformational” medicines and vaccines. NVIDIA has compute power and expertise that could prove indispensable to these efforts. GSK’s AI lab will employ NVIDIA’s DGX A100 Systems (and NVIDIA data scientists) and will have access to the Cambridge-1 supercomputer, as mentioned above. With the whole world laser-focused on the search for the elusive Covid-19 vaccine, this partnership comes as welcome news.
My takeaway here is that NVIDIA is delivering on the promise it made alongside its proposed Arm acquisition and working with the right partners to extend its reach into medical research.
From the DPU to the DOCA SDK, NVIDIA's networking and offload play for the data center looks promising from here. If there's one thing we know about NVIDIA, it's that it does not pull its punches—competitors beware. I believe the novel approach of offloading storage, security, and networking functions onto a high-powered smart NIC will free up crucial CPU cycles and serve NVIDIA well, especially if its claim that a single DPU can replace as many as 125 CPU cores bears out. DOCA is critical, and given the company's success with CUDA, it knows this, too.
Beyond the data center news, it is encouraging to see NVIDIA apply its compute power to relevant, timely causes such as medical research and video conferencing, and to democratizing AI at the edge. I'm happy with what I heard from NVIDIA at GTC 2020—here's hoping we'll be able to attend the next one in person.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.