About this Research Topic
The rise of high-quality, consumer-level virtual and augmented reality (VR/AR) headsets has created a remarkable opportunity for everyone to share and experience their world in a powerful new way. However, capturing and efficiently representing real-world scenes and objects for immersive, photorealistic viewing in a headset remains an important and difficult challenge.
Traditional light-field representations are powerful but require enormous computational resources to capture, represent and render. Furthermore, mixed-reality content is typically static and mismatched to the user's actual physical and social context. Finally, the virtual reality community has yet to fully embrace the remarkable progress of the deep learning community in leveraging large amounts of data to improve its technology.
This Research Topic focuses on techniques for, and studies of, capturing and sharing the real world in virtual and augmented reality. We encourage researchers to tackle the significant challenges and opportunities in VR/AR content capture and creation. We envision a new era of mixed reality in which everyone can capture compelling VR/AR content and make it broadly accessible.
Advances in the design of geometric proxies and compression techniques are needed to make mixed-reality content more efficient to process, store and render. New methods for creating mixed-reality content should build on recent work in adapting to the user's physical environment, specifically its layout, lighting conditions, and context. Meanwhile, deep learning methods offer significant and under-explored promise for leveraging vast amounts of data to improve mixed-reality content capture, representation and rendering.
We welcome submissions of Original Research and Reviews on (but not limited to) the following topics:
- Capture, reconstruction, and/or rendering of a physical environment for VR viewing
- Adapting a virtual environment to fit a physical environment and the user's context
- Mono-to-stereo conversion for VR images and video
- Enabling free-viewpoint rendering for VR content
- Capturing objects to view in AR
- Adaptive AR rendering to match the physical scene and illumination
- Handling dynamic scenes in reconstruction and rendering
- Autocompletion of sparse captures
- Incorporation of audio, haptics and other perceptual modalities
- Content editing and production for VR
- Perception, attention direction and depth understanding
- Capturing, representing and/or rendering of lifelike digital humans for VR
- Image/video compression algorithms for captured real-world content
Keywords: Mixed Reality, 3D Reconstruction, View Synthesis, Light Fields, Dynamic Reconstruction
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.