A live volumetric video stream of myself
One of the greatest challenges for the immersive computing industry, which encompasses XR, the metaverse and all other forms of spatial media, is content. Plain and simple, content drives user engagement and revenue, which in turn grow the market and attract new users. One of the most promising ways of generating content for both VR and AR leverages volumetric video scans of real people for a multitude of applications. For the longest time, I’ve been tracking many different companies in this space, including 8i, Microsoft and Tetavi. Many of these scans start out at quite high resolution but ultimately deliver a sub-par image quality that I believe breaks the immersion. One of the most promising competitors in this space is San Diego-based HypeVR, whom I’ve written about before. This week HypeVR announced an exciting new breakthrough in volumetric video technology.
The high fidelity live streaming breakthrough
HypeVR’s new solution feels like a quantum leap from where the company was only two years ago. Back then, HypeVR powered its live real-time point-cloud streaming solution with an array of Intel RealSense cameras (all of which have since reached end of life). While it was able to stream over 5G, the image quality and voxel density were not as good as the company’s offline-processed captures.
HypeVR’s pedigree is in filmmaking. Their new volumetric live streaming solution uses high-quality cameras like RED’s Komodo to capture extremely high-fidelity volumetric video for the purpose of creating life-like photorealistic experiences. The entire solution is camera agnostic and utilizes off-the-shelf cameras—it can even use higher-resolution cameras like RED’s V-Raptor to capture 8K video. This is a huge differentiator from HypeVR’s competitors, all of whom rely on IOI, RealSense, Kinect or other proprietary camera solutions. HypeVR, on the other hand, can utilize any camera with an SDI feed, genlock and timecode sync (though the company’s preferred solution is pairs of the aforementioned RED Komodo cameras).
HypeVR’s solution is also capable of outward-facing 360-degree volumetric capture, which takes in more than just a limited stage. Not only is HypeVR’s solution camera agnostic and cinema-grade, but it also runs live in real time. This means that processing time is a thing of the past: these scans happen instantly, with offline captures saved as backups. Speaking of processing, all of it is done on commodity computers, so no special FPGAs or processing cards are necessary to run this real-time solution. In fact, while HypeVR currently uses Nvidia GPUs to accelerate its live streaming solution and take advantage of their AI tensor cores, it is not beholden to Nvidia; it could work with Intel in the future if it so chooses, thanks to Intel’s XMX cores. HypeVR’s real-time mesh compression codec also enables streaming over 5G networks, requiring only 50 Mbps to deliver photorealistic volumetric video at 24 FPS. The codec currently achieves a 20-30x compression ratio in real time, and the company’s CEO, Tonaci Tran, tells me it is targeting 25 Mbps in the coming year, a rate that would make it compatible with almost any internet connection. HypeVR already has six patents granted (and two pending) covering holographic capture, holographic compression, holographic video delivery and holographic workflow.
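To put those stream rates in perspective, here is a quick back-of-the-envelope calculation of the per-frame data budget. Only the 50 Mbps, 25 Mbps and 24 FPS figures come from HypeVR; the arithmetic and the helper function below are my own illustration.

```python
# Rough per-frame bit budget at HypeVR's stated stream rates.
# The 50/25 Mbps and 24 FPS figures are from the article; the rest is
# simple arithmetic for illustration only.

def per_frame_budget_bytes(bitrate_mbps: float, fps: int) -> float:
    """Average bytes available per frame at a given stream bitrate."""
    bits_per_frame = bitrate_mbps * 1_000_000 / fps
    return bits_per_frame / 8  # 8 bits per byte

current = per_frame_budget_bytes(50, 24)  # today's 5G stream
target = per_frame_budget_bytes(25, 24)   # next year's target rate

print(f"50 Mbps @ 24 FPS: ~{current / 1000:.0f} KB per volumetric frame")
print(f"25 Mbps @ 24 FPS: ~{target / 1000:.0f} KB per volumetric frame")
```

In other words, each photorealistic volumetric frame has to fit in roughly a quarter of a megabyte today, and about half that if the 25 Mbps target is reached.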
Seeing is believing
HypeVR recently gave me a demonstration of its new volumetric scan capabilities, scanning me with three pairs of 180-degree cameras. While I have seen all kinds of volumetric video captures in my day, going back to systems like 8i’s in the early days of AR and VR, I’ve never seen anything close to what I saw in HypeVR’s demonstration. The solution looks so good that, without seeing it in person and witnessing yourself in high-fidelity volumetric video, you’d never believe it was done live. It looks even better than most offline-captured volumetric video content I have seen, yet it is done live in real time and processed in milliseconds. Since the scan happens live, you can record someone getting scanned while the scan renders in AR, VR or any medium you want. Not only that, but I got to experience the capture in both AR and VR headsets, including the Quest 2 and the Lenovo ThinkReality A3. I was also impressed with how easily HypeVR leveraged Snapdragon Spaces to build an app that demonstrates the real-time AR volumetric video while recording it with Lenovo’s web app on top of that. In fact, HypeVR’s volumetric video is of the highest quality I’ve ever seen on the Quest 2 or the Lenovo ThinkReality A3. It truly takes realism to an entirely new level and, in my opinion, solves one of the biggest problems of virtual human solutions: the uncanny valley.
Volumetric video has many applications across the metaverse, spanning everything from training to telepresence to live events. HypeVR’s live real-time high-fidelity volumetric video solution and its mesh compression codec can help solve many different problems across many different industries. You can finally have real holographic “teleportation,” where people can virtually transport into another place with a photorealistic representation of themselves. This is truly instantaneous delivery, with no post-production delay. The content is available as it’s captured, making it a potential game-changer for telepresence in meetings and other collaborative technologies. Furthermore, these scans aren’t limited to people; they could be leveraged in marketing and retail, say, to show people products that are on sale or provide virtual try-ons of clothing. Live concerts could finally look as good as they do in person, with an actual, natural-feeling sense of depth and fidelity. Not only can HypeVR’s solution capture these videos in incredibly high fidelity in real time; its compression algorithms also enable the real-time streaming of this content to anyone with a 5G connection on virtually any device. Taking a 16-camera RED setup as an example, HypeVR can take 96 Gbps of uncompressed raw video and turn it into a 50 Mbps volumetric live stream, a compression ratio of 1920:1. That ratio is a real breakthrough.
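For readers who want to check the math themselves, the quoted ratio follows directly from the two bitrate figures in the example. The short sketch below is my own arithmetic, not HypeVR’s code; only the 96 Gbps and 50 Mbps numbers come from the company.

```python
# Sanity-check the compression ratio quoted for the 16-camera RED setup.
# The 96 Gbps raw figure and 50 Mbps stream figure are from the article;
# the ratio is simply their quotient after unit conversion.

def compression_ratio(raw_gbps: float, stream_mbps: float) -> float:
    """Ratio of raw capture bitrate to delivered stream bitrate."""
    raw_mbps = raw_gbps * 1000  # convert Gbps to Mbps
    return raw_mbps / stream_mbps

ratio = compression_ratio(96, 50)
print(f"{ratio:.0f}:1")  # prints "1920:1", matching the article's figure
```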
HypeVR appears to have solved some of the biggest barriers to the adoption of holographic video for immersive computing. Not only has it delivered some of the highest-fidelity volumetric video captures I have ever seen, but it can live stream them to devices in real time without losing quality. I believe HypeVR content could be used to quickly evaluate the visual fidelity of AR and VR experiences, given the high quality of the end visual. HypeVR’s volumetric video solution is also the industry’s first high-fidelity solution that utilizes off-the-shelf hardware, a fact that is notable in and of itself. I have watched HypeVR hone its solution over the last several years, simplifying it and reducing the bitrate, all while improving its fidelity. There are many low-quality volumetric video experiences in the metaverse today that break immersion and feel insufficient. HypeVR’s live volumetric video streaming solution is poised to change that.
Note: Moor Insights & Strategy writers and editors may have contributed to this article.