Siggraph is the premier graphics conference in the world, where people from across the many parts of the 3-D graphics industry get together and talk about where the industry is headed and where it should go next. Sprinkle in a heavy dose of research and academic presentations, and you’ve got the formula for a typical Siggraph conference. Siggraph 2023 was held in the Los Angeles Convention Center, and, true to form, it addressed some of the most pressing issues within the industry.
In this post, I’m going to talk about the burgeoning OpenUSD graphics standard before diving into the many announcements from Nvidia that made it the biggest mover at the show. In a second post, I’ll cover the rest of the interesting things I saw at Siggraph 2023 and give my overall thoughts on why this year’s Siggraph, which commemorated the organization’s 50th anniversary, didn’t quite live up to its usual standard.
OpenUSD and glTF
OpenUSD, also known as USD, is a framework for 3-D graphics data that was originally created by Pixar and then open-sourced in 2016. Since then, other companies including Adobe, Apple, Autodesk and Nvidia have adopted USD as the standard format for 3-D graphics files. These companies came together just before Siggraph 2023 to announce the Alliance for OpenUSD, which will help to manage and promote the continuation of the standard.
The Alliance for OpenUSD already has its first general members, including Cesium, Epic Games, Foundry, Ikea, SideFX and Unity. We are seeing the industry start to coalesce around USD as the standard for the metaverse and 3-D collaboration. Even companies like OTOY and Light Field Lab, which have created their own Immersive Technologies Media Format (ITMF), have found ways to be compatible with USD to maximize the utility of their platforms. This is possible thanks to the existence of the Metaverse Standards Forum, which has created a place for the different standards bodies and the companies involved with them to communicate and collaborate so they can harmonize their efforts rather than duplicate each other’s work.
This brings us to the glTF (for GL Transmission Format) component of the equation, which I believe is equally important for the success of the metaverse, the spatial web, the spatial internet or whatever you choose to call it. glTF is a 3-D file format designed to be extremely lightweight and performant so it can run smoothly on virtually any device. There has also been a lot of work on USD-to-glTF conversion aimed at minimizing data loss and breakage in the final graphical object.
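To make glTF’s lightweight design concrete, here is a minimal sketch that builds a complete, valid one-triangle glTF 2.0 scene as plain JSON in Python. This is purely illustrative and not tied to any particular converter or exporter; the vertex values and file layout are my own.

```python
import base64
import json
import struct

# Three vertices of a triangle, packed as little-endian float32 (9 floats = 36 bytes).
positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
blob = struct.pack("<9f", *[c for v in positions for c in v])

gltf = {
    "asset": {"version": "2.0"},  # the only field the glTF 2.0 spec strictly requires
    "scene": 0,
    "scenes": [{"nodes": [0]}],
    "nodes": [{"mesh": 0}],
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
    "accessors": [{
        "bufferView": 0,
        "componentType": 5126,         # 5126 = FLOAT
        "count": 3,
        "type": "VEC3",
        "min": [0.0, 0.0, 0.0],        # min/max are required for POSITION accessors
        "max": [1.0, 1.0, 0.0],
    }],
    "bufferViews": [{"buffer": 0, "byteOffset": 0, "byteLength": len(blob)}],
    "buffers": [{
        "byteLength": len(blob),
        # Vertex data embedded as a data URI so the file is fully self-contained.
        "uri": "data:application/octet-stream;base64,"
               + base64.b64encode(blob).decode("ascii"),
    }],
}

doc = json.dumps(gltf)
print(len(doc), "bytes for a complete one-triangle scene")
```

Because a glTF asset is just JSON plus binary buffers, an entire scene like this fits in well under a kilobyte, which goes a long way toward explaining why the format works so well for delivery to end-user devices.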
What I hear from well-informed contacts in the industry is that USD will likely be the file format used to collaborate and share high-quality data, while glTF, true to its name, will be the transmission format for delivering a finished file to devices. To make this a reality, there are still many details to iron out—for instance, how do you handle physically based materials?—but OpenPBR (Open Physically Based Rendering) may help fix some of those problems down the road. It was extremely encouraging to see how many people attended the OpenUSD and glTF panel and presentations during Siggraph 2023. People seem curious, excited and happy to (potentially) have a common set of file formats that can finally unify the industry.
Nvidia’s latest advances in AI computing
Nvidia CEO Jensen Huang returned to Siggraph for the first time in five years and led off his company’s press conference there by announcing a new version of its Grace Hopper superchip, the GH200. Huang trotted out his favorite “Buy more, save more” line when talking about the various configurations of the Grace Hopper–powered DGX supercomputer for different AI applications. The new version of the GH200 superchip unites a Grace CPU with a Hopper GPU for higher performance. It features an industry-leading Grace CPU with 72 Neoverse V2 Arm cores, 500GB of LPDDR5x memory and 141GB of HBM3e on-package memory with a whopping bandwidth of 5TB per second. A pair of these combination CPU/GPU superchips can be linked to create a single system with 144 CPU cores, 8 PFLOPS of Hopper GPU performance, 282GB of HBM3e and a memory bandwidth of 10TB per second per node.
Huang said that the new GH200-based DGX systems can deliver up to 12X the AI inference performance of a comparable x86 system while also reducing power usage by 40%, in an example based on a $100 million budget. Scaling the comparison down, he said a Grace Hopper-based DGX system costing $8 million could match the performance of a best-in-class x86 AI system while using only 5% of the power.
Nvidia also announced that the DGX Cloud offering it introduced earlier this year will power Hugging Face’s new AI training service. Beyond the Hugging Face partnership, Nvidia unveiled the Nvidia AI Workbench, which brings together all of the capabilities of Nvidia’s AI platform while making it easier to build and manage LLMs through simplified infrastructure setup and deployment. As if that weren’t enough, Nvidia also announced its Nvidia AI Enterprise 4.0 AI platform, which enables data processing, training, inference and deployment across multiple platforms and offers complete end-to-end libraries for AI.
Nvidia’s big bet on the Omniverse
After Nvidia’s GTC 2023 event a few months ago, Moor Insights & Strategy CEO and chief analyst Patrick Moorhead offered some of his thoughts on Nvidia’s Omniverse platform, which he described as a “full-stack cloud environment for developing, deploying and managing industrial metaverse applications.” Based on the presentations at Siggraph, it’s clear to me that Nvidia sees the Omniverse platform as the connector between its work in generative AI training and generative AI inference. A key building block of this platform is the OpenUSD standard.
As mentioned earlier, OpenUSD was the brainchild of Pixar, which uses the standard internally to create many of the world’s most notable animated films. While there is no doubt that Pixar has put a lot of work into making OpenUSD a standard for anyone to use, it is also quite clear that Nvidia is aggressively contributing to the standard to make it as useful as possible for the Omniverse platform. The success of OpenUSD and the formation of the Alliance for OpenUSD are critical to Nvidia’s success with Omniverse, because the benefits of Omniverse are realized only when more apps support and use OpenUSD. This in turn will drive demand for more high-end Nvidia GPUs like the RTX 6000 Ada (discussed below) or even the GH200. This is why we’ve seen Nvidia ratchet up its investment in OpenUSD over the last three years, and it shows no signs of letting up.
Other companies certainly see the benefits of working with the Omniverse platform. For instance, Shutterstock, through its acquisition of 3-D asset marketplace TurboSquid, is getting more involved with Omniverse by using Nvidia’s Picasso generative AI service to create 3-D content from text prompts. I spoke with some Shutterstock representatives at Siggraph, and it’s clear that TurboSquid also sees a world where OpenUSD becomes the standard for engineering, design and collaboration, while glTF becomes the standard for the final output that goes to users.
I think this dynamic will evolve over time as WebXR grows and more websites use 3-D assets, created during the design and engineering phase, as interactive objects inside web pages, virtual storefronts and showrooms. WebXR is the standard API on which most AR and VR browser experiences are built, and it offers a cross-platform way to deliver immersive 3-D content over the existing internet. At some point, WebXR will be as commonplace in the 3-D and XR space as WebGL is today on standard websites.
One of the most impressive things that I saw Nvidia announce at Siggraph 2023 was ChatUSD. What makes ChatUSD so interesting is that it combines Nvidia’s LLM capabilities with OpenUSD’s graphical capabilities so users can create things quickly and easily inside Omniverse. Anything that helps to accelerate the utilization of a platform like Omniverse, I believe, will be a key driver of growth for the 3-D market, whether that ends up being used in an XR headset or not.
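Part of what makes an LLM-driven workflow like ChatUSD plausible is that USD scenes can be expressed in the human-readable .usda text format, so a language model can emit a scene as ordinary text. The snippet below is a hand-written illustration of that kind of output; it is not actual ChatUSD output, and the prim names are my own invention.

```python
# A minimal hand-authored USD scene in the plain-text .usda format:
# a sphere parented under a transform, using standard USD xformOp conventions.
scene = """\
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 0.5
        double3 xformOp:translate = (0, 0.5, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
"""

# Any USD-aware app (or the usdview tool) could open the resulting file.
with open("hello_world.usda", "w") as f:
    f.write(scene)
```

Because the format is this legible, a "generate, inspect, tweak by hand" loop between an assistant and a human artist is a realistic way of working inside Omniverse.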
Even more processors from Nvidia
In addition to the GH200, Nvidia also announced a few other new GPUs built on the Ada Lovelace architecture, including the L40S with 48GB of GDDR6 and the RTX 6000 Ada. These are effectively the same high-end professional GPU for graphics and AI applications, with the RTX 6000 Ada intended for workstations and the L40S for servers and rack-mounted systems. Nvidia announced a new generation of OVX servers based on the L40S that will support Nvidia Omniverse, Nvidia AI Enterprise 4.0 and Nvidia CUDA (Compute Unified Device Architecture), with up to eight GPUs per node and a claimed peak performance of 1.7x that of an Nvidia A100 system. Nvidia and its partners also announced a series of new RTX workstations that can use up to four of the new RTX 6000 Ada GPUs. Nvidia said that partners Boxx, Dell Technologies, Lenovo, Lambda and Z by HP have systems running these processors available immediately.
Nvidia also refreshed its entire line of RTX 5000, RTX 4500 and RTX 4000 GPUs with the new generation of the Ada Lovelace architecture. The new RTX 5000 Ada GPUs are available now, while the RTX 4000 Ada will arrive in September and the RTX 4500 Ada in October.

At Siggraph 2023, HP announced the new HP Z4 Rack G5, which pairs the latest RTX 6000 Ada GPU with an Intel Xeon W-2400 CPU in an extremely slim 1U form factor. It also offers optional HP Anyware remote access over a secure, low-latency connection, which lets the user sit far away from the noisy workstation while it renders at full power. Like many other workstation builders, HP also refreshed many of its latest workstations with the latest RTX Ada cards, including its popular Z8 Fury G5 workstation with support for up to four RTX 6000 Ada GPUs. HP also hosted an extremely well-attended creators stage, where the company brought all kinds of partners on stage to talk about their different areas of professional graphics expertise.
As if all of that wasn’t enough, Nvidia also had a booth in the emerging technologies pavilion where it collaborated with Looking Glass Co. to demonstrate Nvidia’s ability to quickly create 3-D images and video using AI in partnership with Looking Glass Blocks. The companies described the full demonstration as “an AI-mediated 3-D video conferencing system that can reconstruct and autostereoscopically display a life-sized talking head” on a Looking Glass display.
Wrapping up on Nvidia
Siggraph 2023 was without a doubt dominated by Nvidia’s news cycle and keynote, especially considering the company’s multitude of new GPUs and all the talk about OpenUSD and Omniverse. Nvidia was omnipresent at Siggraph 2023, between its DGX servers based on the new GH200 and its refreshed RTX family, built on the Ada Lovelace architecture, which powered so many new workstations.
Omniverse and USD each had a major impact on this year’s Siggraph, and it has become abundantly clear that the industry is finally beginning to coalesce around the USD format. Nvidia’s investment in both AI and open standards like OpenUSD will allow it to continue to be the dominant player in the professional graphics space, powering the many engineering, creative and entertainment industries represented at Siggraph.