“Graphics and artificial intelligence are inseparable. Graphics needs AI, and AI needs graphics,” NVIDIA Founder and CEO Jensen Huang explained from the main stage at SIGGRAPH 2023, setting the tone for a packed week of research, innovation, and community. Given the historic nature of this event, it felt apt to declare this paradigm shift, the start of the era of artificial intelligence in graphics, as the industry considers what is in store over the next half-century.
The 50th SIGGRAPH conference, held in Los Angeles, August 6-12, 2023, was a celebration of the past, present and future of computer graphics and interactive techniques. For my first visit to this seminal event and long-time AWE community partner, I wanted to learn how professionals working in graphics, 3D, and animation use and interact with XR tools.
AI and graphics are core components of spatial computing, driving the experience layers of XR technology. Advancements in 3D pipelines, runtime 3D, and AI all build better, more seamless layering of personalized content in digital worlds. What you experience in AR and VR and how you discover it will likely soon be driven by AI customization.
Do these hardware manufacturers, creator platforms, studios, academics, and artists see their futures with AI, AR, and VR in the same ways I do?
Like most events this year, AI and its role in optimizing creativity and workflows topped the agenda at SIGGRAPH. By far the biggest takeaway from the conference and expo was that AI will soon be touching, making, localizing, optimizing, and personalizing nearly every type of creative content.
While this gives 3D artists and teams impressive new creative tools, the Hollywood actors’ and screenwriters’ strikes on the local periphery ensured that conversations about ethics and creative ownership abounded at the show as well. There was even a large union presence on the expo floor.
At the packed session “Sony Pictures Imageworks & Sony Pictures Animation Presents: ‘Spider-Man: Across the Spider-Verse,’” artists from Sony Pictures Imageworks shared an exclusive behind-the-scenes look at the making of the much-anticipated film, focusing on the artistic and technological innovations the production developed. These included a generative painting tool that uses AI to develop realistic textures and lighting effects, scaling the creative concepts of each character’s world to super-human effect. *Above image source: Sony Pictures Animation.
Flawless AI, featured in Autodesk’s session “Exploring the Transformative Power of AI in Media & Entertainment,” shared how their AI workflows can seemingly fix any issue in post, including automated dialog replacement (ADR) and even matching an actor’s mouth movements to sync with foreign-language dubbing. The company also threw a lovely roofdeck party where we could connect with several AWE community XR creators and producers.
From these entertainment examples, it was clear how crucial AI personalization and localization will be in creating frictionless XR experiences that are accessible and relatable to a global audience.
Unity Technologies' Weta Tools team shared many machine learning (ML)-based technologies creators can use at each step of the 3D pipeline. XR creators will specifically love the Ziva Face Trainer, which uses ML to train and rig digital characters for real-time 3D scenes and applications.
Just as AI is advancing what’s possible on the graphics creation side of things, developments in 3D screen technologies, from the newest VR headsets to stereoscopic flat screens, are transforming how people can experience and interact with these dynamic and immersive types of content. Display technologies must grow and adapt to each new advancement in graphics—this is a virtuous reality for XR.
While researchers and technologists have been presenting futuristic display prototypes and methodologies at the show for decades, the sold-out VR theater experience and the long lines at every XR demo in the Expo and Experience Hall signaled that the broader SIGGRAPH audience is still building familiarity with these tools.
The connections between motion capture technology and body tracking in VR apps took center stage as an ambitious booth by Vicon greeted guests entering the Expo with a multiplayer demonstration of a location-based experience (LBE) featuring multi-sensory haptics, six degrees of freedom (6DOF), and quirky body-mapped avatars.
The announcement for the collaborative 16-minute immersive story notes that The Clockwork Forest was “the result of a massive R&D effort from Vicon and Artanim, the Swiss research institute behind the VR technology powering the Dreamscape platform [and] developed in partnership with the Swiss haute horlogerie [clock] manufacturer Audemars Piguet.”
Those lucky enough to snag a ticket got to experience this VR adventure story for the first time without bulky tracking hardware like backpacks or gloves, improving the shared, immersive content and reducing the complexity of the setup.
VR headsets like the Meta Quest Pro drew huge lines of graphics professionals and creators wanting to get their first look at one of the leading VR displays, along with some of the most consumer-friendly 3D apps and XR creation tools.
The spacious Meta booth was filled with stations where guests could experience spatial audio, passthrough, and other features across a number of business, creative, and wellness applications. It’s always nice seeing TRIPP VR featured as a stalwart brand included in Quest demos.
Advancements in eye-tracking, computer vision, and stereoscopic displays have brought flat-screen 3D to the forefront of “desktop XR.” Much like the magic perspective you see on 3D billboards in Tokyo and Times Square, these new 3D tablets feel primed for use in experience-driven showrooms and personalized product campaigns.
Fresh off the announcement of their recent acquisition agreement, the Dimenco + Leia Inc. booth celebrated a new vision for cross-platform (Windows, Android) options for 3D visuals without eyewear. Their booth was packed with workstations and display demos so creators could dive into more interactive 3D workflows on the spot.
The Meta Reality Labs team won big with fans for their VR headset prototypes. Reality Labs took home the Best in Show award for their Reprojection-free VR Passthrough prototype, while the Retinal-resolution Varifocal VR project, which you may have heard about on the Weekly Spatial Highlights, won Audience Choice.
It was exciting to see the number of haptics vendors on the Expo floor and the amount of sensory prototypes on display in the Experience Hall. While much of the conference focuses on the graphics side of things, the interactive techniques at the show illustrated the myriad ways 3D design can be extended into additional experience layers, dimensions, and creative tools.
The XR space is equally excited about haptics, as many expect haptics tech to drive new spatial interfaces that will replace the pointers and hand controllers widely used in XR today.
Creators were eager to get hands-on with WEART haptic gloves. The WEART booth seemed to have a constant line of people waiting to try the simple form factor and multi-sensory feedback that lets users “get more accurate details of virtual objects and experience lifelike interactions” with them.
History was on display across every inch of this 50th event. And the Japanese research teams from the Tachi Lab, Inami Lab, and Embodied Media Lab understood the assignment and brought an impressive mini-museum of haptics demonstrations from 20 years of presenting at SIGGRAPH.
Kouta Minamizawa, Ph.D., of the Embodied Media Lab walked me through hands-on demonstrations of the decades of work on display. Seeing these interaction research projects laid out as a chronological history highlighted their ingenuity and showed just how ahead of their time some of these prototypes were.
Notably, there were also many explorations of the sensations around eating and consuming digital foods on display across the conference and experience hall. These included examples that simulated everything from food textures and tastes all the way to simulated swallowing.
While taste is definitely one of the more personal senses to explore with technology, people’s curiosity about engaging with digital objects and assets cannot be denied, given how many examples in this category were on view.
From this 50-year survey of computer graphics and interactive techniques, it’s clear that the road to XR started in some of the sessions, papers, and demonstrations at past SIGGRAPHs.
Today, these pros continue to push the boundaries of how they create, activate, and interact with their creative works in ways that bring digital and 3D ideas into the same frame of view as our real world, whether exploring the worlds of film in a realistic way or using virtual objects and ideas in IRL work.
The connections between the computer and the content are becoming clearer every day. Head-mounted displays now act like keys to finding and manipulating new types of 3D content in entertainment, academia, and business.
The lines between digital content and physical content are blurring, provoking new questions to be answered at future events.
“How can we interact and work with 3D creative work and ideas in meaningful ways? What’s the best way to share, distribute, and consume these works? How can we extend the best parts of today’s experiences into the future? How can digitizing ideas like emotions, empathy, or even food help us feel more connected to the people and world around us?”
If you’re curious about these topics, or want to help shape the answers, there’s no better way to understand future technology than to dive in and immerse yourself in a passionate community of professionals and creative technologists.
Here’s to 50 more years, SIGGRAPH!