In my 2021 coverage of NVIDIA's GTC, I said that I hoped to be in person again at GTC 2022, but it was another year of virtual attendance. Two weeks ago, NVIDIA held its annual GTC event, where the company made fourteen new product, service, or customer announcements.
I have already covered some of NVIDIA's GTC 2022 on Twitter, in my conversation with CEO Jensen Huang, and on the Six Five Podcast with Futurum Research's Daniel Newman. In this piece, I want to focus mainly on the automotive announcements. Let's jump right in.
NVIDIA DRIVE Hyperion 9
The DRIVE Hyperion 9 platform is NVIDIA's next-generation software-defined platform for automated and autonomous vehicles. NVIDIA says DRIVE Hyperion 9 improves on both compute power and sensor redundancy, and for autonomous platforms, the two go hand-in-hand.
The more computing power a system has, the better and faster an autonomous vehicle can make its driving decisions. NVIDIA says DRIVE Hyperion 9 will deliver double the performance of the current DRIVE Orin-based architecture in the same power envelope. The DRIVE Atlan SoC leverages NVIDIA's next-generation GPU architecture, Grace-class Arm CPU cores, and deep learning and computer vision accelerators, all of which are important to the redundancy of the platform. In my coverage of the last-generation Hyperion 8 platform, I noted that NVIDIA had left some headroom for AI computing within the DRIVE Atlan SoC for Level 4 autonomous driving. I believe I was right on that, and with double the performance over the last generation, it is more than capable of Level 4 autonomous driving.
NVIDIA calls DRIVE Hyperion the vehicle's “nervous system” and DRIVE Orin its brain. It sometimes blows my mind to think of the brain as a critical nervous system component, but the framing is accurate. I think NVIDIA and every big player in the autonomous vehicle space should see its respective AV platform for what it is replacing: the brain and the senses. While I sometimes think I have double the brain performance of some drivers on the road, it is impressive that NVIDIA has managed to double the performance of its DRIVE Hyperion platform in a single generation. It is to NVIDIA's advantage to have a full vertical stack spanning its AI, automotive, robotics, safety, and datacenter technologies. That is the name of the game in this space right now. It is no longer enough to throw a bag of parts over the wall, as NXP does, and call it a day. Platforms matter.
I have said before that the more redundancy a system has, the easier it is for the brains of these platforms to make a decision. To go back to the nervous system analogy, just as the brain is important for processing sensory data into a decision, so are the inputs themselves: two eyes, a vestibular (balance) system, and even a sense of touch. If Spiderman were driving, wouldn't he be a better driver than the average person with his Spidey senses? Joking aside, the more cameras and sensors a vehicle has, the more data there is to process, and the more data there is to process, the better the vehicle's decisions. NVIDIA says DRIVE Atlan can process more sensor data, including imaging radar, enhanced cameras with higher frame rates, two additional side lidars, and improved undercarriage sensing with better camera and ultrasonic placement. That brings the total sensor count to 14 cameras, nine radars, three lidars, and 20 ultrasonics, plus three in-cabin cameras and one radar for interior occupant sensing. That is a lot of sensors, and the ability of the Atlan SoC to process all that data and turn it into a safe, calculated driving decision is a feat of its own.
NVIDIA DRIVE Map
NVIDIA announced DRIVE Map, a multimodal mapping platform for Level 3 and Level 4 autonomous driving. Mapping is another important component of autonomous driving, considering AV platforms need a certain amount of prior knowledge about the road and its infrastructure. An AV cannot follow the rules of the road if it does not already know the rules of the road, and a map with multiple layers of data lays the foundation for that knowledge.
NVIDIA says DRIVE Map contains multiple localization layers of data for use with the camera, radar, and lidar modalities, and says the AI driver can localize to each layer of the map independently. Each layer is useful for different information: the camera layer is best for seeing lane dividers, road markings, traffic lights, signs, and poles; the radar layer is useful in poor road conditions and at night; the lidar layer is useful for building a three-dimensional representation of the world at 5 cm resolution.
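To make the idea of localizing to each layer independently a little more concrete, here is a minimal sketch of confidence-weighted fusion of per-modality pose estimates. The class names, weighting scheme, and numbers are my own illustration of the general concept, not NVIDIA's actual DRIVE Map implementation.

```python
from dataclasses import dataclass


@dataclass
class PoseEstimate:
    """One layer's independent guess at the vehicle's position."""
    x: float           # meters, in the map frame
    y: float           # meters, in the map frame
    confidence: float  # 0.0 (no localization lock) to 1.0 (full lock)


def fuse_poses(estimates):
    """Confidence-weighted average of per-layer pose estimates.

    If one modality degrades (say, the camera at night), its low
    confidence shrinks its influence while the other layers carry on,
    which is the redundancy benefit of independent localization layers.
    """
    total = sum(e.confidence for e in estimates)
    if total == 0:
        raise ValueError("no layer produced a usable estimate")
    x = sum(e.x * e.confidence for e in estimates) / total
    y = sum(e.y * e.confidence for e in estimates) / total
    return x, y


# Example: camera degraded at night; radar and lidar still strong.
camera = PoseEstimate(10.4, 5.2, 0.2)
radar = PoseEstimate(10.0, 5.0, 0.8)
lidar = PoseEstimate(10.1, 5.05, 0.9)
print(fuse_poses([camera, radar, lidar]))
```

In this toy example, the fused position lands close to the radar and lidar estimates because the degraded camera layer carries little weight.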
This type of mapping data tacks on an extra layer of redundancy that I had not even thought about, something human drivers do without thinking. Imagine driving to work (back when everyone drove to work) and finding the road on your main route under construction. Because you already knew from a previous drive that this route would add extra minutes to your commute, you decided to take a different one. In the same way, NVIDIA DRIVE Map provides prior information to the AI driver that is then updated and enhanced by the vehicle's own sensors. My human example was about which route to take, but I believe this redundancy is most applicable to the vehicle's safety. I see it as an opportunity for the AI driver to make driving decisions with the map information it already has, then update and change them if need be as new data arrives, making the decision-making process more efficient and safer.
NVIDIA says DRIVE Map is built with two engines—a ground truth survey map engine consisting of survey vehicles and a crowdsourced map engine consisting of potentially millions of passenger vehicles. The ground truth engine is based on the DeepMap survey engine and should lay the groundwork for DRIVE Map with centimeter-level accuracy, while the crowdsourced map engine offers the scalability and updates needed to expand the map.
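As a rough illustration of how a crowdsourced map engine might fold fleet observations into a shared map, here is a sketch in which a reported road change is promoted into a map tile only after enough independent vehicles confirm it. The promotion threshold, class names, and tile model are my own assumptions for illustration, not NVIDIA's actual design.

```python
from collections import defaultdict

# Assumed number of independent vehicle reports needed before a
# change is accepted into the map; a real system would tune this.
PROMOTION_THRESHOLD = 5


class CrowdsourcedMapLayer:
    """Toy model of a crowdsourced map update pipeline."""

    def __init__(self):
        self.tiles = {}                  # tile_id -> set of accepted features
        self.pending = defaultdict(set)  # (tile_id, feature) -> reporting vehicles

    def report(self, vehicle_id, tile_id, feature):
        """A vehicle observed `feature` (e.g., 'lane_closed') in `tile_id`.

        Using a set of vehicle ids means repeat reports from the same
        vehicle do not inflate the count toward promotion.
        """
        self.pending[(tile_id, feature)].add(vehicle_id)
        if len(self.pending[(tile_id, feature)]) >= PROMOTION_THRESHOLD:
            self.tiles.setdefault(tile_id, set()).add(feature)

    def features(self, tile_id):
        """Accepted features for a tile, layered on the survey base map."""
        return self.tiles.get(tile_id, set())
```

The survey (ground truth) engine would supply the centimeter-accurate base layer, while something in the spirit of this consensus mechanism keeps the map fresh at fleet scale.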
Not only is DRIVE Map useful for AVs on the road, but also for AV development, as an Earth-scale digital twin built within Omniverse. The DRIVE Map data is stored and loaded within Omniverse, where automated content generation tools create a drivable simulation. This simulation environment lets developers create and test scenarios within an accurate digital world for testing and validation before putting anything into the real world. I believe NVIDIA's digital twin could become a powerful tool for developers and carmakers in assuring the public of a vehicle's safety.
NVIDIA also announced partnerships with BYD and Lucid Group to adopt NVIDIA DRIVE for their next-generation EV fleets. China-based BYD is the world's second-largest EV maker. EVs are becoming more popular every year, and with many automakers making EV climate pledges, they are disrupting the industry. EV and AV go almost hand-in-hand, and I believe NVIDIA's DRIVE Hyperion platform is a tremendous asset to automakers transitioning towards AV.
Lucid Group launched its first vehicle this past year, winning Motor Trend's 2022 Car of the Year. Lucid Group plans on using NVIDIA DRIVE at the center of its DreamDrive Pro platform. Although Lucid Group is one of the new players in the automotive space, I believe its endorsement as an EV-only newcomer further solidifies the position of NVIDIA's DRIVE platform as a leader in AV for EV.
Alongside these partnerships, NVIDIA announced the start of production of its NVIDIA DRIVE Orin AV computer. DRIVE Orin has been adopted by over 25 vehicle makers across a slate of EVs, robotaxis, shuttles, and trucks. NVIDIA also says its total automotive design win pipeline has increased from $8 billion to $11 billion over the next six years. I know very well how much time, effort, and R&D NVIDIA has put into its automotive plays, and these results are big wins for the company. It shows that NVIDIA's automotive plays are the real deal.
The NVIDIA DRIVE Hyperion 9 platform looks very promising, considering NVIDIA was able to double the performance of the last generation while increasing the amount of sensory data it processes. It is important to understand that these platforms are replacing the most powerful computer ever made: the human brain. Although I do not believe man can outdo God in creating the most powerful computer in the world, I do believe these platforms will eventually replace human drivers, armed with a redundant array of sensors and a map.
NVIDIA DRIVE Map also looks promising, and I believe it could play an important role in the safety and efficiency of AVs. NVIDIA's digital twin could likewise become a powerful tool for developers and carmakers in validating a vehicle's safety before it ever drives off the lot. Seeing major players like BYD and Lucid adopt these platforms gets me excited for the next few years, as fully autonomous electric vehicles begin hitting the road.
Note: Moor Insights & Strategy co-op Jacob Freyman contributed to this article.