
ORIGINAL RESEARCH article

Front. Virtual Real.

Sec. Virtual Reality and Human Behaviour

This article is part of the Research Topic: Human Factors and Design in Immersive and Generative Media Technologies.

AIMERS: An AI-based MR Scene Design System with Human-Centric Perception Optimization

Provisionally accepted
  • 1 The Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China
  • 2 School of Design, The Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR, China

The final, formatted version of the article will be published soon.

Visual realism is fundamental to convincing Mixed Reality (MR) experiences. However, current design workflows implicitly assume that physically-based rendering parameters naturally lead to perceptually realistic results. We begin from the opposite hypothesis: physically accurate parameters and users’ perceived realism are often misaligned, leading to inconsistent visual fusion and significant design overhead. To address this problem, we present AIMERS, an AI- and perception-guided MR scene design framework. First, neural inverse rendering automatically estimates lighting-independent material properties, together with geometry and illumination, from multi-view RGB inputs, removing the need for manual material calibration. We then introduce an interactive MR perceptual interface that allows users to adjust key realism parameters during immersive viewing, enabling us to capture perception-aligned preferences across scenes. By jointly analyzing the physically derived parameters and the perceptual data, we derive parameter intervals that best match perceived realism for different scene categories. Controlled user studies reveal a consistent mismatch between physical correctness and human perception, and demonstrate that combining AI estimation with perception-guided adjustment leads to more coherent and convincing MR visual fusion. Overall, this work establishes a perception-aligned paradigm for MR scene design, bridging the gap between physical accuracy and human perception and providing practical guidance for building realistic MR applications.
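To illustrate the kind of joint analysis the abstract describes, the following is a minimal sketch, not the authors' implementation: it assumes each record pairs a physically estimated parameter value with the value a participant preferred during immersive viewing, and derives a per-category preferred interval from the middle 50% of user adjustments. All names, numbers, and the interquartile-range rule are illustrative assumptions.

```python
# Minimal sketch (assumed workflow, not AIMERS code): derive a
# perception-aligned interval for one rendering parameter per scene
# category from paired (physical estimate, user-preferred) values.
from statistics import median, quantiles

# (scene_category, physically_estimated_value, user_adjusted_value)
# Values below are made up for illustration only.
records = [
    ("indoor", 0.82, 0.71), ("indoor", 0.85, 0.74), ("indoor", 0.80, 0.69),
    ("outdoor", 0.64, 0.70), ("outdoor", 0.66, 0.73), ("outdoor", 0.61, 0.68),
]

def perceptual_interval(records, category):
    """Return the (low, high) interval spanning the middle 50% of
    user-preferred values for one scene category, plus the median
    offset between preferred and physically estimated values."""
    pairs = [(p, u) for c, p, u in records if c == category]
    preferred = sorted(u for _, u in pairs)
    q1, _, q3 = quantiles(preferred, n=4)        # interquartile range
    mismatch = median(u - p for p, u in pairs)   # perceived vs. physical
    return (q1, q3), mismatch

for cat in ("indoor", "outdoor"):
    interval, mismatch = perceptual_interval(records, cat)
    print(cat, "preferred interval:", interval, "median offset:", round(mismatch, 3))
```

A nonzero median offset in such an analysis would reflect the mismatch between physically accurate parameters and perceived realism that the study reports; the actual paper's statistical procedure may differ.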

Keywords: artificial intelligence, inverse rendering, mixed reality, perceptual enhancement system, research methods, visual fusion

Received: 27 Oct 2025; Accepted: 15 Jan 2026.

Copyright: © 2026 Wei, Wang, Zhu and Luximon. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Yan Luximon

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.