10:10 AM - 10:30 AM
Advances in connectivity are bringing us all closer together, digitally. We have more access to the world than ever before, but we want it personalized. The ability to push high-end content like volumetric video over 5G will get us up close and personal with our heroes, our entertainment, and our history. The race for the best AR and VR hardware will edge into more and more homes as everyone recognizes that the latest and greatest content is best seen in full dimensionality. Content is king and the world is starving for more. AI can accelerate creation and drop costs for producers. So what are the latest and greatest trends, who is pushing the gas pedal on infrastructure, and when can I get it?
08:20 AM - 08:40 AM
08:40 AM - 09:00 AM
Augmented Reality is emerging as the key medium for two-way remote collaboration applications, guiding participants more effectively and efficiently via visual instructions. In technology support settings, such “Remote Assist” applications are increasingly used by remote support experts to guide field technicians through hardware installations and repairs to save cost, reduce repair time, and eliminate errors. “Self-Assist” is the natural next step of AR-driven technology support, in which the field technician receives instructions directly from the AR system through virtual procedures. Needless to say, “Self-Assist” is more desirable: it eliminates dependency on remote experts, saving cost, and further shortens support timelines by providing instant access to the relevant information rather than waiting for an expert to guide the repair. In this talk, we explore the “Remote Assist” to “Self-Assist” journey for technology support, and demonstrate how we leverage AI capabilities to enable “Self-Assist”.
09:20 AM - 09:40 AM
AI is only as good as its training data, a problem for computer vision, where sourcing and prepping image data is slow and expensive. This is especially true for mixed reality applications as the industry transitions from AR-based targets to true environmental interaction. In the near future, 5G is going to catalyze computer vision in ways that will be hugely enabling for the industry. LexSet was one of the winners of Verizon’s Built on 5G Challenge (as well as the winner of the Auggie for Startup to Watch at AWE 2019). Its President and Co-Founder Leslie Karpas will unpack the problems with training computer vision models today and explain how using synthetic data can solve them, as well as taking a look at how this dovetails with the incoming 5G future. Using the lens of LexSet’s flagship TDaaS (Training Data as a Service), we’ll go in-depth on how using 3D simulation to create photo-realistic synthetic data improves Vision AI models' overall accuracy at object recognition and spatial navigation.
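The core appeal of synthetic training data is that annotations come free with the render: because the generator places every object, it knows every label exactly. As a minimal toy sketch (not LexSet's actual TDaaS pipeline; the function name and scene format here are hypothetical, and a real system would render photo-realistic 3D scenes rather than 2D grids):

```python
import random

def render_synthetic_scene(width=64, height=64, n_objects=3, seed=0):
    """Compose a toy 'scene' as a 2D grid of class IDs and emit
    pixel-perfect bounding-box labels alongside it. In a real
    synthetic-data pipeline the same idea applies: the renderer
    controls object placement, so ground-truth labels are exact
    and cost nothing to produce."""
    rng = random.Random(seed)
    image = [[0] * width for _ in range(height)]  # 0 = background
    labels = []
    for obj_id in range(1, n_objects + 1):
        # Pick a random rectangle that fits entirely in the frame.
        w, h = rng.randint(5, 15), rng.randint(5, 15)
        x, y = rng.randint(0, width - w), rng.randint(0, height - h)
        for row in range(y, y + h):
            for col in range(x, x + w):
                image[row][col] = obj_id
        labels.append({"class": obj_id, "bbox": (x, y, w, h)})
    return image, labels

image, labels = render_synthetic_scene()
```

Varying the seed yields unlimited labeled examples, which is the property that makes synthetic data attractive when hand-annotated images are scarce or expensive.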
09:40 AM - 10:00 AM
When it comes to design, visualizing the future means conceptualizing ideas that aren’t yet fully defined. Designing for AI requires a rethinking of how it is authored, trained, and managed. It’s largely uncharted territory and the necessity for invention is great.
Becoming fluent in this space challenges many designer norms: trading lorem ipsum for deeper entanglement with data; inventing by abstracting analogous conventions when there are no defined patterns to pull from. And even more interesting, tackling AI in terms of its role, expression, transparency, and fallibility. It's a dramatic shift from pixel pusher to orchestrator. Hear seasoned designer, strategist, and innovator Matthew Santone's insights on tackling AI, the ethical challenges involved, and how designers can help steer the future.