10:05 - 10:30
In addition to supplying Moverio, its range of finished AR smart glasses for specific verticals, Epson is now bringing its core microdisplay technology, together with the original optical engine, to market.
The new solution is set to transform emerging applications and innovative products by enabling easy integration of AR capability and simplifying the development of custom head-mounted binocular see-through displays.
10:35 - 11:00
Moving from VR to AR enables us to visually engage with the real world, but it presents great challenges for optics. How do we create a display that does not look like a display?
11:05 - 11:30
What does it mean that something is "real"? In this talk, Nils Pihl of Auki Labs explores how language itself is augmented reality technology, and how the human shamanic impulse and memetics can inform us about the future of AR as a medium, and why sharable AR is the penultimate step in human communication before direct neural interfaces.
Exploring the history and intersection of language, AR, and shamanic traditions through the lens of memetics and behavioral psychology, this session asks what it means for something to be "real" and what "augmenting reality" really means in the context of human interaction.
11:35 - 12:00
Many businesses struggle to get users onto their SDK, platform, or product. Learn how you can grow your business by appealing to average consumers and working with the XR community, creating a win-win situation for XR businesses and users.
12:45 - 13:35
The recently funded Horizon Europe project XR4Human brings together a consortium of researchers, developers, and users to promote human-centered XR systems. The consortium aims to develop European standards for XR that accelerate an ethical, human-centered development process for hardware and software manufacturers, who in turn benefit technically and commercially from highly usable and inclusive systems. Privacy, safety, ethical, legal, and interoperability issues are discussed with a view to bridging the gap between industry and academia, as well as fostering cross-industry collaboration, to solve the challenges of tomorrow.
Join these inspiring speakers for a discussion of responsible innovation that takes into account privacy, safety, and ethical and legal implications, and how they intersect to shape socially acceptable XR governance.
13:40 - 14:05
With the rise of 3D in eCommerce, retailers are creating their own 3D pipelines and workflows. We will demonstrate that, in order to reap the benefits of 3D technology at scale in eCommerce, it has become essential to reach alignment across the industry and introduce standards.
We will showcase what can be done right from the start to address all of the use cases a retailer might have down the road. Our approach is to streamline 3D model creation and processing intelligently, thereby unlocking scalability. We will also include a practical example from the furniture industry, demonstrating how technology can help reduce manual effort at different points in the process.
14:10 - 14:35
Many Industry and Construction 4.0 use cases, such as maintenance, production line planning, or defect detection, require displaying in AR, and updating over time, complex Digital Twins at very large scale. By moving spatial computing and rendering into the cloud or to the edge over 5G connectivity, we will show how the AR cloud platform developed in the EU Research and Innovation project ARtwin can meet these requirements, whatever the capabilities of the AR device.
14:40 - 15:05
The modern warfighter has limited situational awareness and performs under a high cognitive load while operating autonomous systems and executing missions with them. Currently, operators work heads down and eyes down, losing the ability to fully know what is happening in the environment around them.
The use of AI-generated 3D models and AR pre-visualization tools can eliminate mission repetition and reduce the overhead of communicating commander's intent, while allowing for full situational awareness. The warfighter will also see a reduction in the expertise required for unmanned-system flight control, enabling them to "launch and forget" until notified of pertinent activity.
XR tools enable seamless command and control of unmanned systems while keeping the operator safe.
15:35 - 16:00
In the current ecosystem of spatial artificial intelligence (SAI), access to, applications of, and benefits from SAI continue to concentrate power within a small number of individuals and organizations. Citizens are in no way able or prepared to engage with SAI as it exists today, an inequity that will only grow in scope and scale as SAI's capabilities, capacities, and uptake grow. Addressing this inequity requires a holistic reimagining of the current ecosystem. In this space, my colleagues and I have highlighted the need for what we call 'Collective SAI'. As we imagine it, Collective SAI would be a citizen-run technological tool that, through its accessibility, transparency, independence, and utility, would provide a counterbalance to the power dynamics that currently define the SAI ecosystem. By matching technology with technology in a practical and achievable way, such an approach is arguably our best hope of ensuring equity and justice in the world of SAI, now and into the future.
16:05 - 16:30
Established remote-assistance solutions allow remote experts to set annotations and highlights directly within the on-site user’s field of view. However, there is no shared realm of experience where both users can interact as if they were physically standing next to each other.
This talk presents an ongoing research project that aims to solve this problem by capturing and streaming the on-site user's real 3D environment (including people) to create a shared sense of space. The remote expert, wearing a VR HMD, can move freely within the live 3D reconstruction, while the on-site user, wearing an AR HMD, can see the expert in their own space. The result is an intuitive and natural collaboration experience that overcomes the shortcomings of current solutions.
16:35 - 17:00
The increasing popularity of XR applications is driving the media industry to explore the creation and delivery of new immersive experiences. A volumetric video consists of a sequence of frames, where each frame is a static 3D representation of a real-world object or scene captured at a different point in time.
However, such content can consume significant bandwidth. Compression of volumetric video is required to obtain data rates and file sizes that are economically viable for the industry, and standardization is required to ensure interoperability. The Visual Volumetric Video-based Coding (V3C) standard defines a generic mechanism for coding volumetric video and can be used by applications targeting different types of volumetric content, such as point clouds, immersive video with depth, or transparency of visual volumetric frames.
The Moving Picture Experts Group (MPEG) has specified two applications that use V3C: Video-based Point Cloud Compression (V-PCC) and MPEG Immersive Video (MIV). This presentation provides an overview of the generic concepts of these technologies, together with the MPEG V3C systems part, for efficient volumetric video-based media streaming in applications such as telelearning and XR experiences.
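As a rough illustration of why compression and standards such as V3C matter here, the sketch below estimates the raw data rate of an uncompressed point-cloud volumetric video. The point count, per-point size, and frame rate are illustrative assumptions, not figures from the talk or the standard.

```python
# Back-of-envelope estimate of the raw (uncompressed) data rate of a
# volumetric video represented as a dynamic point cloud.
# All figures below are illustrative assumptions.

POINTS_PER_FRAME = 1_000_000   # assumed number of points per captured frame
BYTES_PER_POINT = 15           # assumed: 3 x 4-byte float coordinates + 3 bytes RGB
FRAMES_PER_SECOND = 30         # assumed capture rate

bytes_per_second = POINTS_PER_FRAME * BYTES_PER_POINT * FRAMES_PER_SECOND
gigabits_per_second = bytes_per_second * 8 / 1e9

print(f"Raw data rate: {gigabits_per_second:.1f} Gbit/s")  # ~3.6 Gbit/s
```

Under these assumptions the uncompressed stream is on the order of gigabits per second, far beyond typical consumer connections, which is the gap codecs such as V-PCC and MIV are designed to close.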
17:05 - 17:30
The technology that brought us deepfakes is permanently reshaping the way we create content of all kinds - from movies and music to apps and websites. The idea of applying algorithms to manipulate or generate media isn't new, but several advances in computer science have brought the technology out of the halls of academia and into the hands of anyone with a desire to create. In this session, we'll explore how synthetic media and computational design are inspiring all sorts of machine-assisted creativity and changing the overall creative process, and see examples of how this will bring about an explosion of immersive content that will shape the metaverse.