Research Topic

Capturing and Sharing the Real World in Virtual and Augmented Reality

About this Research Topic

The rise of high-quality, consumer-level virtual and augmented reality headsets has created an incredible opportunity for everyone to share and experience their world in a powerful new way. However, capturing and efficiently representing real-world scenes and objects for immersive, photorealistic viewing in a headset remains an important and difficult challenge.

Traditional light field representations are powerful but require enormous computational resources to capture, represent, and render. Furthermore, mixed-reality content is typically static and mismatched to the user's actual physical and social context. Finally, the virtual reality community has yet to fully embrace the deep learning community's remarkable progress in leveraging large amounts of data to improve their technology.
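To make the resource cost concrete, here is a back-of-envelope sketch of the raw storage needed for a single densely sampled light field. The grid size, view resolution, and bit depth below are illustrative assumptions, not figures from this call:

```python
# Back-of-envelope storage cost of one densely sampled light field.
# All parameters are illustrative assumptions chosen for this sketch.
grid_views = 30 * 30              # camera positions on a 30x30 capture plane
width, height, channels = 1920, 1080, 3  # one 1080p RGB view per position

# Uncompressed size at 8 bits per channel, for a single static snapshot
bytes_per_frame = grid_views * width * height * channels

print(f"{bytes_per_frame / 1e9:.1f} GB per light-field frame")
# prints "5.6 GB per light-field frame"
```

Even this modest configuration yields several gigabytes per frame before any video frames or higher bit depths are considered, which is why compact geometric proxies and compression are emphasized below.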

This Research Topic focuses on techniques for, and studies of, capturing and sharing the real world in virtual and augmented reality. We encourage researchers to tackle the significant challenges and opportunities in VR/AR content capture and creation. We envision a new era of mixed reality in which everyone can capture powerful VR/AR content and make it broadly accessible.

Advances in the design of geometric proxies and compression techniques are needed to make mixed-reality content more efficient to process, store, and render. New mixed-reality content should build on recent work that adapts to the user's physical environment, specifically its layout, lighting conditions, and context. Meanwhile, deep learning methods offer significant and under-explored promise for leveraging vast amounts of data to improve mixed-reality content capture, representation, and rendering.

We welcome submissions of Original Research and Reviews on (but not limited to) the following topics:

- Capture, reconstruction, and/or rendering of a physical environment for VR viewing
- Adapting a virtual environment to fit a physical environment and the user's context
- Mono-to-stereo conversion for VR images and video
- Enabling free viewpoint rendering for VR content
- Capturing objects to view in AR
- Adaptive AR rendering to match the physical scene and illumination
- Handling dynamic scenes in reconstruction and rendering
- Autocompletion of sparse captures
- Incorporation of audio, haptics and other perceptual modalities
- Content editing and production for VR
- Perception, attention direction and depth understanding
- Capturing, representing and/or rendering of lifelike digital humans for VR
- Image/video compression algorithms for captured real-world content


Keywords: Mixed Reality, 3D Reconstruction, View Synthesis, Light Fields, Dynamic Reconstruction


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


About Frontiers Research Topics

With their unique mix of contributions, from Original Research to Review Articles, Research Topics bring together the most influential researchers, the latest key findings, and historical advances in an active research area.


Submission Deadlines

Abstract: 27 October 2020
Manuscript: 03 February 2021

Participating Journals

Manuscripts can be submitted to this Research Topic via the following journals:

