SIGGRAPH 2017: Bringing VR Into The Next Phase

By Anshel Sag - August 9, 2017
SIGGRAPH signage (photo credit: Anshel Sag)

Four years ago at SIGGRAPH 2013 in Anaheim, I witnessed something that set me on the career path I am on today: for the first time, I caught a glimpse of what was possible with VR at an industry level, with multiple major companies starting to get serious about it. That SIGGRAPH was the first time I genuinely believed there was potential in VR and that, with some improvement, it had a chance. Fast forward four years, and I'm back at SIGGRAPH, this time in LA. This year felt like a revisiting of VR, less as a potential technology and more as a developing and improving one.

SIGGRAPH has always been the show where the latest emerging technologies get shown; in fact, there is a section of the show floor specifically dedicated to emerging technologies. Some of the things shown at SIGGRAPH may never see the light of day with consumers, but others will eventually become real a few years down the road. SIGGRAPH is the graphics industry's way of showcasing what it believes to be important and where it sees the industry going over the coming years. For that reason, the conference is crucial for anyone looking to predict the industry's future trajectory.

At SIGGRAPH this year, I saw a lot of diversity in the Emerging Technologies Showcase and the VR Village. Many of the technologies I witnessed at the VR Village were experiences built on existing VR platforms in ways that make them more immersive and engaging. Some of those technologies did that through improved haptics and user feedback, while others did it through new types of experiences or narrative methods. At the VR Village, I saw many different kinds of added-immersion technologies, like atmoSphere, which uses audio and a haptic ball to increase immersion with music (and could easily be paired with a VR headset).
There was also the Organic Robotics Lab, which used variable pressure in a Vive controller to simulate different material hardness (like a hard sword or a floppy foam finger). Also showcased was a submerged haptics technology that uses fingertip haptic airbags to create a 'haptic display,' letting you feel objects without them physically being there. In addition to those experiences, I saw the 'Bottomless Joystick 2': a wireless controller designed to simulate different haptic feedback depending on the type of device being simulated. Multiple haptic technologies that use sound waves to simulate haptic feedback were also demonstrated, including HaptoCloneAR and the Touch Hologram in Mid-Air.

Google also had a tiny booth inside the VR Village that, in my opinion, showed off one of the more interesting implementations of mixed reality. Google created a virtual headset-removal simulation, which allows users to see one another in mixed reality without the headset blocking a person's face or eyes. This is accomplished by scanning the person's face in 3D with an Intel RealSense camera and tracking their eye movement with eye tracking built into the HMD. The person's facial movement is then tracked with a simple webcam to maintain an accurate facial representation.

Another popular booth showcased Neurable's BCI attachment for the HTC Vive, designed to read your brainwaves and track your eye movement to understand what you are thinking. Neurable's ambitious goal is to use brainwaves to create a new generation of user interfaces for VR. Unfortunately, I didn't get a chance to try it; one of the headsets was broken for most of the show. Lytro was also in the VR Village, showcasing its latest VR experience and repurposing its light field camera technology for cinematic AR and VR. Ahead of the show, Magic Leap and NVIDIA also released papers about their most recent research into AR and VR.
Magic Leap's Deep SLAM uses neural networks to accelerate simultaneous localization and mapping (SLAM) without additional camera hardware. NVIDIA talked about its varifocal displays, which help users focus more naturally while wearing AR and VR headsets. NVIDIA was also showcasing its Isaac robot-training simulator inside its Project Holodeck, using VR and AI together to teach a robot how to play dominoes. I played against the robot in VR, and while it kept up for the most part, it needed more training.

NVIDIA robot (photo credit: Anshel Sag)

Dell also made a pretty big splash at the show, celebrating 20 years of its Precision professional workstations. Most of Dell's celebration was forward-looking and showed how engaged the company already is in the VR space. Dell has eight Centers of Excellence around the world that are helping the company accelerate VR creation and enable commercial VR. That was also where I tried the latest revision of the Meta 2 AR glasses, which have improved considerably since I tried them last year.

HP Inc. also made an impression with its own VR-focused event, centered around the Z VR Backpack PC and the company's plan to open its own VR Centers of Excellence. The Z VR Backpack PC was born out of the lessons HP learned from the Omen X prototype backpack PC, which I tried at SIGGRAPH last year. While I enjoyed using the Z VR Backpack PC with its hot-swappable batteries, I am not sure the commercial market will need VR backpack PCs. However, HP does make the backpack dockable, giving it dual purposes. A side note: the Z VR Backpack PC was the backpack StarVR used to demonstrate the latest version of its commercial headset this year, and that headset has gotten significantly better than what I saw from Starbreeze a year ago. In addition to the countless hardware demos from major industry players, there was also plenty of unique and original content being shown at SIGGRAPH, which I honestly did not expect.
SIGGRAPH has shown that there is still a lot of innovation going on in VR and that there is plenty of room for the industry to grow and improve. Even Magic Leap had a booth at the show, although it was there only to recruit new talent. Companies across the graphics industry are starting to figure out where their place is in the VR and AR landscape; others have already figured that out and are busy refining their products and messaging. VR is now entering the commercial phase and becoming a tool for creators, engineers, and marketers; while we are likely years away from truly mainstream AR and VR, we are getting closer every year.
VP & Principal Analyst

Anshel Sag is Moor Insights & Strategy's in-house millennial with over 15 years of experience in the IT industry. Anshel has extensive experience working with both consumers and enterprises, interfacing with B2B and B2C relationships alike, which has given him empathy for and an understanding of what users really want. Some of his earliest experience goes back to his childhood, when he started PC gaming at the ripe old age of 5, built his first PC at 11, and learned his first programming languages at 13.