08:00 AM - 08:20 AM
"Out There" is the first-ever immersive musical in Spatial Computing - a unique combination of storytelling, music and technology. "Out There" was introduced to the world at Comic Con Paris in late 2019, and is making its U.S premiere in 2020. Creator Thibault Mathieu (Wilkins Avenue) and Sound Supervisors Scot Stafford and Richard Warp (Pollen Music Group) will discuss the project's genesis, and how they produced and operated the largest-scale Location-Based Magic Leap experience to date, worldwide.
08:20 AM - 08:40 AM
Futurus partnered with United Way of Greater Atlanta, Everfi, and the NFL to create “Call the Play.” The experience is based on the Character Playbook, a digital learning platform designed to help middle school kids learn how to make tough decisions in challenging situations and develop healthy relationships.
The experience debuted at the Super Bowl Live Experience in Atlanta, Georgia in the nine days leading up to Super Bowl LIII. Over 12,000 attendees passed through United Way of Greater Atlanta's booth. Individuals were given the opportunity to put on a headset and be transported to a 3D replica of Mercedes-Benz Stadium, where they were greeted by NFL Hall of Famer Jerry Rice, who served as the host of the experience. From there, individuals made selections and learned how to navigate difficult situations in a safe, judgment-free zone. The children (and adults) who tried the experience learned which responses led to the best outcomes. This interactive learning experience proved to be an effective way to communicate how best to navigate tough situations like bullying, presented in an engaging and memorable format to which the target audience could relate.
Sara Fleeman (United Way of Greater Atlanta) and Annie Eaton (Futurus) will walk audience members through the decision-making, development and deployment processes. They will share insights, challenges and solutions in a fast-paced deployment on the national stage.
Attendees of this session will learn:
• How to make tough decisions on a deadline for a national event.
• Rationale for using VR to create empathy and educate.
• What the buying process looks like for a company seeking creative thinking.
• The process behind delivering three digital mediums of characters in one experience – 2D video, 3D character modeling, and volumetric capture.
• Challenges and learnings from hardware deployment on-site.
08:40 AM - 09:00 AM
There is no interaction design standard for virtual reality, which makes it difficult to create intuitive experiences. We will go through use cases based on our learnings at LiveLike building apps for broadcasters that let audiences watch live sports events with friends and fellow fans.
09:00 AM - 09:20 AM
Computers are rapidly evolving to better perceive the world through technologies like computer vision and voice interfaces. Simultaneously, users increasingly expect multisensory user experiences in lieu of traditional, two-dimensional audiovisual interfaces. How does this new technology affect our perception, and what can we learn from our perceptual systems to inform this multisensory design?
This talk will cover how the human brain perceives the world, how it perceives in immersive experiences, and how we can leverage this understanding to build a more empathetic future of spatial computing. Presenters Stefanie and Laura will draw on their expertise as both cognitive neuroscientists and user experience researchers in industry, sharing observations from rigorous user research studies at the forefront of AR/VR content creation and a synthesis of previous neuropsychological studies on AR/VR interaction models.
09:20 AM - 09:40 AM
User flows for mixed reality are difficult to review effectively using traditional 2D tools like presentations and animated mockups. Instead of creating 2D storyboards, you can build prototypes straight in VR.
During the talk, we will review the production workflows of companies like Walmart, Unity Technologies and Cartoon Network. Spatial apps need spatial tools: prototyping inside VR gives you speed, effectiveness and a real feeling of scale. In XR there is a huge demand to prototype ideas quickly, iterate constantly and keep the workflow efficient. We want to see more apps that feel born in VR rather than like PC/web apps transplanted into VR; the same is true for AR, with all the constraints you have to account for in usability, comfort of menu navigation and more.
Too much development happens on guesswork, without a vivid immersive prototype that stakeholders can approve, and the cost of development, especially in XR, is very high. How can we easily and quickly create prototypes and review the whole simulation in a headset or on a device before deciding whether to commit the budget and move forward? With VR prototyping and animation tools like Tvori or Microsoft Maquette, you can quickly create 3D UI, and with Tvori you can animate your users' workflow. Within hours or days instead of months, you get a full simulation of the experience, in a headset, before you start coding.
Think of VR game development in Unity. It is extremely hard to get a feeling of scale if you are not prototyping inside VR: you keep putting the headset on and taking it off, and you still may not judge the scale properly, as the sketch below illustrates.
Spatial apps need spatial creation tools.
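To make the scale argument concrete, here is a minimal sketch of the kind of throwaway in-headset prototype the speakers advocate, written with three.js and WebXR (an assumption for illustration; the talk's own workflow uses tools like Tvori and Maquette, not code). It places a mock menu panel next to a rough 1.8 m human-height reference, since in WebXR 1 unit equals 1 meter:

```ts
import * as THREE from "three";
import { VRButton } from "three/examples/jsm/webxr/VRButton.js";

// Quick prototype scene: a UI panel plus a human-height reference,
// so scale can be judged in-headset instead of on a flat monitor.
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.xr.enabled = true;
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 50);

// 1 unit = 1 meter in WebXR, so the sizes below are real-world sizes.
const panel = new THREE.Mesh(
  new THREE.PlaneGeometry(0.6, 0.4), // a 60 cm x 40 cm menu panel
  new THREE.MeshBasicMaterial({ color: 0x2266ff })
);
panel.position.set(0, 1.5, -1); // roughly eye height, 1 m away
scene.add(panel);

const humanReference = new THREE.Mesh(
  new THREE.BoxGeometry(0.5, 1.8, 0.3), // rough 1.8 m tall silhouette
  new THREE.MeshBasicMaterial({ color: 0x888888, wireframe: true })
);
humanReference.position.set(0.8, 0.9, -1);
scene.add(humanReference);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

On a flat monitor the panel is just numbers; in the headset you immediately feel whether 60 cm at arm's length is comfortable, which is exactly the judgment 2D storyboards cannot support.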
09:40 AM - 10:00 AM
Over the last ten years, augmented, mixed and virtual reality have been steadily making their way into our lives. However, these immersive and interactive technologies are still not well known to the general public. Because creating and distributing such experiences remains complex, there is still quite a long way to go to truly democratize these technologies. As time passes, though, more and more solutions are engineered to that end. This talk will give an overview of the current world of XR experience creation and distribution by and for the general public.
10:10 AM - 10:30 AM
In 2019, Synesthetic Echo made and released Bumblebee Jam, an explorative choose-your-own-adventure musical experience set in a world where flowers produce music. It was one of the first games and creative tools played exclusively with AR audio wearables, such as Bose AR glasses and headphones. Designing for a new medium always presents challenges, so in this talk Maria will share her design process and the discoveries that helped make Bumblebee Jam an accessible and memorable experience.
Attendees will learn the primary design considerations for AR audio, how to set up a fast development pipeline, how to structure user tests, and how to ensure a smooth publishing process.
10:30 AM - 10:50 AM
Voice cloning has made tremendous progress in the last couple of years and is poised to shake up the increasingly audio-focused world. We'll demonstrate how synthetic voices can integrate with AR/VR experiences and how AR/VR creatives can get started. We'll discuss the importance and engagement benefits of augmenting voice experiences with custom voices, dive into technical details, discuss the current state of the art in voice cloning, see dozens of examples of where it can make an impact, and give a glimpse of what the future holds.
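As a rough illustration of the integration path described above, here is a minimal sketch that fetches synthesized speech and plays it as spatialized audio in a browser-based XR scene using the standard Web Audio API. The endpoint URL, request payload and voice_id are hypothetical placeholders for illustration, not any real provider's API:

```ts
// Minimal sketch: fetch speech from a hypothetical voice-cloning TTS
// endpoint and play it as positional audio, the way a synthetic character
// voice might be wired into a WebXR scene.
const ctx = new AudioContext();

async function speakAt(text: string, x: number, y: number, z: number): Promise<void> {
  // Hypothetical endpoint and parameters; substitute your provider's API.
  const res = await fetch("https://example.com/v1/tts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text, voice_id: "cloned-narrator" }),
  });
  const buffer = await ctx.decodeAudioData(await res.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // A PannerNode positions the voice in 3D, so it appears to come
  // from the character's location in the scene.
  const panner = new PannerNode(ctx, { positionX: x, positionY: y, positionZ: z });
  source.connect(panner).connect(ctx.destination);
  source.start();
}

// Example: a cloned voice greeting from head height, one meter ahead.
// (Browsers require a user gesture before the AudioContext can start.)
speakAt("Welcome to the experience.", 0, 1.6, -1);
```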
10:50 AM - 11:10 AM
What is human movement data? What is it good for? And how can it take XR experiences and spatial computing to the next level?
Sarah will examine cognitive neuroscience research on the importance of 3D human movement data for creating empathetic connections and learning. She will share the hidden potential XR has for impacting the brain in pro-social ways using this data and touch on the possibilities of combining AI with these data sets. If time allows, she will give a short demo of MEU, the first social platform built around 3D movement data and part of the AWE Playground.