by Suzan Oslin | eXperience aRchitect
The future is being built now, and the hardware and software we use will do more than help us communicate with each other. They will create new interfaces and, with them, new design processes, thought patterns, and interactions. Virtual reality, smart cities, and brain-computer interfaces won’t only streamline our lives; they will give us deeper insights into our own psyche.
HP has launched a brand-new virtual reality solution. “For the first time, HP brings not just a headset, it brings an SDK to the marketplace that takes advantage of the bioanalytic sensors in the headset,” said Elias Stephan, Head of Business Development at HP. HP designed the product for market while working with independent software vendors around the world to build out the Omnicept ecosystem.
The HP Omnicept solution was released on April 30, 2021. “As an HP strategy in virtual reality, we introduced a product last year called HP Reverb G2 and that product is doing absolutely phenomenal,” said Stephan. The Reverb G2 is designed for consumers and businesses. The Omnicept is “focused on developers and enterprise accounts… we think it’s the most intelligent VR headset by far,” Stephan said.
HP released the SDK to developers as part of an Early Access Program. Stephan explained that the HP Omnicept SDK uses machine learning and artificial intelligence to make sense of all the data captured by the headset. HP calls the result Cognitive Load Insight.
Cognitive load shows trainers how well their students understand the content they’re consuming. “Cognitive Load Insight is basically a powerful predictor of brain capacity, performance, and expertise,” said Stephan. Because high cognitive load can also signal poor usability, several companies are using it as a usability-testing mechanism.
To capture enough data to calculate cognitive load, HP redesigned its headset. It has four cameras, a photoplethysmography (PPG) heart-rate sensor on the forehead, eye tracking, and a face camera. “We’re not just capturing where your eyes are looking, we’re capturing your pupil dilation as well,” said Stephan.
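HP hasn’t published the internals of Cognitive Load Insight (it is a trained machine-learning model), but a toy sketch shows the shape of the data flow: combine pupil dilation and heart rate into a single load score relative to a resting baseline. The `SensorSample` type, the baseline normalization, and the equal weights below are all illustrative assumptions, not HP’s model.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical sensor sample; the real Omnicept SDK exposes its own
# message types. These names are illustrative only.
@dataclass
class SensorSample:
    pupil_diameter_mm: float   # from the eye-tracking cameras
    heart_rate_bpm: float      # from the forehead PPG sensor

def estimate_cognitive_load(samples: list[SensorSample],
                            baseline_pupil_mm: float,
                            baseline_hr_bpm: float) -> float:
    """Toy cognitive-load score in [0, 1]: deviation from a resting baseline.

    HP's Cognitive Load Insight uses a trained ML model; this is only a
    stand-in to show the shape of the data flow.
    """
    pupil = mean(s.pupil_diameter_mm for s in samples)
    hr = mean(s.heart_rate_bpm for s in samples)
    # Pupil dilation and elevated heart rate both tend to rise with load.
    pupil_term = max(0.0, (pupil - baseline_pupil_mm) / baseline_pupil_mm)
    hr_term = max(0.0, (hr - baseline_hr_bpm) / baseline_hr_bpm)
    return min(1.0, 0.5 * pupil_term + 0.5 * hr_term)
```

A usability team could then flag stretches of a session where the score stays high and review what the user was doing at those moments.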
The headset also has a camera capable of taking 3D images of your face to capture expressions in real time. “Imagine the goggles on your face with a 3D camera looking at the bottom half, we’re going to capture exactly what you look like,” said Stephan. “What are we going to do with this? Expressions. Avatars that look like you and me, real ones.”
HP plans to introduce Expressions and eventually Emotions. The headset will be able to read the user’s expression, which is useful for avatar interactions and for establishing more authentic connections in VR. Reading and predicting emotions will open up many possibilities for emotional well-being and health. Stephan believes the sensor data captured to show expression in avatars is key for training and health care applications.
Dr. Ramses Alcaide, CEO of Neurable, has been working on brain-computer interfaces for the last decade. Neurable’s concept of a brain-computer interface (BCI) is different from Elon Musk’s Neuralink, where a chip is implanted directly in the brain.
“We want to create a world where people with or without impairments can equally participate in the world,” said Dr. Ramses. In 2018, Neurable attached its brain sensors to a VR headset, letting users direct objects in VR with specific interactions. Dr. Ramses explained, “Virtual reality headsets are great. But these aren’t the only devices that can augment and enhance our lives.”
Although the BCI technology worked, there wasn’t a market for it at the time. Neurable’s mission became finding practical, everyday uses for its BCI technology. That’s where Enten was born—Neurable’s first consumer-facing product, a pair of BCI-enabled headphones to help people improve their concentration and focus.
Dr. Ramses pointed to his headphones. “What you guys haven’t noticed is that this entire time I’ve been controlling the slides with my headphones. I have not been using my hands.” Dr. Ramses then held his hands up to the camera and switched between slides with the power of thought. The future of brain-computer interfaces is “hands-free, voice-free,” according to Dr. Ramses. “In the future, you’re not going to strap on one device. You’re going to put on a pair of AR glasses, Apple AirPods, or headphones, and that will be your computer.”
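To picture how that hands-free control works, here is a rough, self-contained sketch: a (faked) classifier labels each window of brain data with an intent, and a debounced loop maps that intent to a UI action. The classifier, the intent names, and the action mapping are all illustrative assumptions, not Neurable’s implementation.

```python
import random
import time

def classify_intent(eeg_window) -> str:
    """Stand-in for a trained EEG decoder; a real system would extract
    features from the window and run a classifier. Here the window is
    ignored and an intent is picked at random for demonstration."""
    return random.choice(["rest", "rest", "rest", "next_slide"])

def advance_slide():
    # In practice this could synthesize a Right-Arrow keypress for the
    # presentation app; here we just log the action.
    print("-> next slide")

ACTIONS = {"next_slide": advance_slide}

def control_loop(read_eeg_window, debounce_s: float = 1.5):
    """Map classified brain events to UI actions, with a debounce so a
    single sustained intent doesn't fire repeatedly."""
    last_fire = 0.0
    while True:
        intent = classify_intent(read_eeg_window())
        now = time.monotonic()
        if intent in ACTIONS and now - last_fire > debounce_s:
            ACTIONS[intent]()
            last_fire = now
```

The debounce matters: thought-driven intents persist for seconds, so without it one “next slide” thought would skip through the whole deck.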
Dr. Ramses says the power of BCIs is that they reveal your cognitive state because they track brain data over a longer period of time. Enten, whose name comes from the Spanish word entender, “to understand,” is a brain-computer interface designed to work for anyone, seamlessly. Sensors built into the Enten ear cushions connect to a dashboard that provides insights into a user’s concentration levels, what time of day they work best, what music helps them focus, and other data points connected to focus and productivity.
BCIs like Enten notice when you enter a focused state and automatically turn on “do not disturb” and noise canceling. When Enten notices a user is coming out of that state, it can play music to help them stay concentrated on what they’re working on. “In the world of brain-computer interfaces, we haven’t seen these types of interactions before, until now. You can understand intent and provide value to the user without them noticing,” Dr. Ramses said. “The design space is enormous.” All that design space also carries the potential for over-stimulation. Dr. Ramses warned, “We need to learn how to diminish information as much as we need to learn how to augment it.”
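A minimal sketch of that interaction loop, assuming a hypothetical device object with do-not-disturb, noise-canceling, and playback hooks (none of this is Neurable’s actual API):

```python
from enum import Enum, auto

class FocusState(Enum):
    FOCUSED = auto()
    SLIPPING = auto()
    IDLE = auto()

class FocusAutomation:
    """Sketch of the Enten-style interaction described above, built on
    hypothetical device hooks: set_dnd(), set_anc(), and play()."""

    def __init__(self, device):
        self.device = device
        self.state = FocusState.IDLE

    def on_focus_update(self, focus_score: float):
        # Hysteresis: enter focus above 0.7, treat below 0.4 as slipping.
        if focus_score > 0.7 and self.state != FocusState.FOCUSED:
            self.state = FocusState.FOCUSED
            self.device.set_dnd(True)    # silence notifications
            self.device.set_anc(True)    # enable noise canceling
        elif focus_score < 0.4 and self.state == FocusState.FOCUSED:
            self.state = FocusState.SLIPPING
            self.device.play("focus_playlist")  # nudge back into focus
```

The two different thresholds act as hysteresis, so a focus score hovering near a single cutoff doesn’t toggle the features on and off repeatedly.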
Haptic technology is about to change the game. Dave Birnbaum is the Senior Director of Technology Strategy in the Office of the CTO at Immersion and creator of the INIT podcast. He believes haptic feedback is underutilized in spatial computing interfaces. “Ultimately, what haptics can do for an interaction is make it more intuitive because it leverages something called embodied cognition,” said Birnbaum. “Haptic interaction is universal… the language of touch transcends cultural boundaries.” Additionally, experiences in virtual reality that incorporate the sense of touch are more memorable.
Birnbaum’s career has focused on the design of haptic feedback in many contexts, from mobile phone vibrations to game immersion. He also currently studies interaction patterns in VR as a haptic consultant at VEIL. He explained, “Haptics is challenging to design because it’s dependent on the entire technology stack, and the weakest link limits what you can do.” Haptic design tools aren’t yet advanced enough for haptics to become a peer to audio and visual media. “That is a key thing that needs to happen in the next couple of years,” said Birnbaum. “An exciting development on the horizon is the arrival of the Tactile Internet... This will allow for actions, movement, and skills to flow across the internet as easily as information. And it will all be mediated by haptic technology that provides you a sense of digital touch.”
Birnbaum has developed new wearable form factors such as haptic finger rings and wrist cuffs. He believes these new types of devices will be the ideal interface to the Tactile Internet, but only if they include natural interaction mediated by haptic feedback. “We’ve tried to find interactions that were meaningful but didn’t require any visual feedback,” Birnbaum said as he showed a storyboard of a smart agent that could squeeze a user’s wrist through a wearable cuff. “If you play a gentle squeeze on the wrist combined with the warmth, it feels like a virtual hug that communicates love and affection.” Standardization in the field of haptic design tools will make such work easier. Ultimately, Birnbaum sees virtual reality and augmented reality as the fusing of fantasy and reality, with haptics playing a key role in bringing that vision to life.
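As a sketch of what authoring such an effect might look like, here is a half-sine squeeze envelope paired with steady warmth, written against a hypothetical cuff API with `set_pressure` and `set_warmth` controls. The envelope shape and values are guesses for illustration, not Immersion’s design.

```python
import math
import time

def virtual_hug(cuff, duration_s: float = 2.0, steps: int = 40):
    """Play a gentle squeeze-plus-warmth pattern on a hypothetical wrist
    cuff exposing set_pressure(0..1) and set_warmth(0..1)."""
    cuff.set_warmth(0.6)                  # low, steady warmth throughout
    for i in range(steps):
        t = i / (steps - 1)
        # Half-sine envelope: pressure ramps up, peaks midway, releases.
        cuff.set_pressure(0.5 * math.sin(math.pi * t))
        time.sleep(duration_s / steps)
    cuff.set_pressure(0.0)                # fully release the squeeze
    cuff.set_warmth(0.0)
```

A smooth envelope is the point: an abrupt onset reads as an alert, while a slow rise and release reads as affection.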
Haptics can work subconsciously to impact performance or behavior, while at the same time reducing overall cognitive load during an interaction. “We believe sometimes there’s information delivered visually or auditorily that really is more appropriate for the haptic channel,” said Birnbaum. Our visual field is often overloaded with clutter and information. With some of that offloaded to touch, the overall experience can be smoother and more user-friendly.
Smart spaces are a part of creating rich experiences across the digital world. “Smart spaces are physical and digital environments in which humans and technology-enabled systems interact in increasingly connected, coordinated, and intelligent ecosystems,” said Simi Shenoy, Lead Product Manager at Magnopus. People, processes, services, and objects are all part of smart spaces; they come together to serve a targeted group of people or a specific business case. Smart spaces go beyond a smart assistant in your home or built into your fridge.
“We’re working on creating spaces that people can consume across virtual and mixed reality,” said Shenoy. Digital twins of real-life objects are a way to give decision-makers and operators better tools. In smart cities, planners can use signals from electronic systems to shift how a city functions. “Cognitive load and TMI (too much information) is a real thing,” Shenoy said. “It’s our responsibility to filter out the noise and provide actionable content in context.” Understanding users’ needs is one of the core principles of human-centered design.
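One simple way to act on that principle is to filter a digital twin’s event stream down to what a given operator can act on, as in this illustrative sketch. The event fields and thresholds are assumptions, not Magnopus code.

```python
from dataclasses import dataclass

@dataclass
class TwinEvent:
    source: str        # e.g., a hypothetical "traffic_sensor_42"
    severity: int      # 0 = info ... 3 = critical
    zone: str          # city zone the event belongs to

def actionable(events: list[TwinEvent], operator_zone: str,
               min_severity: int = 2) -> list[TwinEvent]:
    """Cut a digital twin's event stream down to what the operator in a
    given zone can actually act on, one way to reduce the 'TMI' Shenoy
    describes."""
    return [e for e in events
            if e.zone == operator_zone and e.severity >= min_severity]
```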
Shenoy said her Master’s in Architecture from UCLA opened the door to the XR industry for her. For her Master’s, she designed a physical prototype of a kinetic pavilion that moves based on EEG readings from performers. The pavilion shifts from a closed space during the day to an open one at the end of the day. At night, EEG readings from a DJ shift the geometry of the pavilion. The idea is that the environment can affect us and we can affect it. This is more of an experiment for the future: a gateway for accessibility, human connection and communication, and well-being.
Shenoy built a virtual reality version of her pavilion. She explained, “The idea here was if you don’t have an audience on site, how can they experience this rich, ever-changing environment remotely?” The concept was to link the environment and lighting in real time so that people could participate in physical spaces from anywhere in the world.
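A toy version of the EEG-to-geometry mapping might look like the following, assuming the pavilion exposes a single opening fraction and the EEG pipeline reports alpha and beta band power. The actual installation’s mapping is not public, so this is purely illustrative.

```python
def aperture_from_eeg(alpha_power: float, beta_power: float) -> float:
    """Map performer EEG band power to a pavilion opening fraction in [0, 1].

    Illustrative assumption: higher beta (active engagement) opens the
    structure, higher alpha (relaxation) closes it. The real installation's
    mapping may differ.
    """
    total = alpha_power + beta_power
    if total == 0:
        return 0.5                      # neutral pose when no signal
    ratio = beta_power / total
    return max(0.0, min(1.0, ratio))
```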
Unified user experiences across the digital and physical worlds have value beyond entertainment. In the context of today’s internet, we are bound to information and interface design based primarily in 2D. The internet of tomorrow is based on the spatial world we live in, with contextual information available in our immediate environment. This is where the evolution of smart spaces will manifest as a persistent web of connected spaces, transforming the ways in which we work, learn and play across the spectrum of reality.
Listening to the technologists talk, you can picture them all working together on one project. “I think what we talked about collectively today is all connected,” said Elias Stephan. Dr. Ramses added, “What’s really interesting to me is that all these ideas and devices are separated, but in the future, they’re all going to come together.”
Experimentation is essential to moving technologies like BCIs, haptics, and smart spaces forward. Some companies are brave enough to build and invest in technologies that don’t have an immediate practical use because their leaders are driven by a vision of the future. We need to champion companies that dare to build technology ahead of its time, even when the returns are neither conspicuous nor swift, and we need to celebrate those that realize their vision in a practical way. Congratulations to HP, Immersion, Neurable, and Magnopus for their willingness to build and learn together as an industry, to move technology forward, and to improve human lives.
Watch the event recording on the AWE Nite YouTube channel: