The human experience is inherently multi-sensory, and our understanding of the world is shaped by the interplay of our senses. In recent years, there has been a growing interest in the development of technologies that can capture, simulate, and manipulate multi-sensory experiences. Going beyond vision and hearing, some of these new technologies have incorporated the senses of smell, taste and touch, while exploring the whole spectrum of the real-virtual continuum.
Artificial Intelligence (AI) plays an important role in multisensory research by enabling the integration and analysis of large, complex datasets spanning multiple sensory modalities, providing a better understanding of how the brain processes and combines information from different senses. In addition, AI techniques such as machine learning can make predictions and discover patterns in multisensory data that may not be apparent with traditional methods, shedding light on the neural mechanisms underlying multisensory perception.
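As an illustration of the kind of machine-learning analysis described above, the following sketch performs "late fusion" of two sensory modalities: a separate classifier is trained on each modality and their probability estimates are averaged. All data, feature names, and the task (telling smooth from rough surfaces) are invented for illustration; real multisensory datasets and models would replace them.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
# Hypothetical task: 0 = "smooth surface", 1 = "rough surface"
y = rng.integers(0, 2, n)

# Each modality carries a noisy version of the label in its features.
audio = y[:, None] + 0.8 * rng.standard_normal((n, 4))   # e.g. spectral features
haptic = y[:, None] + 0.8 * rng.standard_normal((n, 3))  # e.g. vibration features

def fit_logreg(X, y, lr=0.1, steps=500):
    """Minimal logistic regression via gradient descent (a bias column
    is appended to X)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1 / (1 + np.exp(-Xb @ w))

w_audio = fit_logreg(audio, y)
w_haptic = fit_logreg(haptic, y)

# Late fusion: average the two unimodal probability estimates.
p_fused = 0.5 * (predict_proba(audio, w_audio) + predict_proba(haptic, w_haptic))
acc = np.mean((p_fused > 0.5) == y)
print(f"fused training accuracy: {acc:.2f}")
```

In practice, fusing modalities this way often outperforms either unimodal classifier alone, which is one simple operationalization of "integration" in multisensory data analysis.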
This Research Topic seeks investigations that bring together researchers from diverse disciplines, including computer science, engineering, psychology, neuroscience, education, healthcare, arts, and design, to explore the latest advances in technology applied to multi-sensory studies and experiences.
Examples of topics and applications include, but are not limited to, the following:
● Acquisition and generation of sensory data: Novel approaches to capturing, generating, and manipulating multi-sensory data. We have a particular interest in research including combined auditory, haptic, olfactory, and/or taste data. This includes the development of new sensors, algorithms, and data formats that can capture the richness and nuance of real-world experiences.
● Sensory substitution devices that convey information normally acquired through one sense via another, for instance when the first sense is impaired.
● Multi-Sensory Fusion and Integration: Techniques for combining multi-sensory data to create an immersive, realistic, aesthetic and/or engaging experience. This includes the development of algorithms for data fusion and analysis, as well as the design of multi-sensory interfaces that can effectively present and integrate these data.
● Applications: Applications of multi-sensory technologies in a wide range of fields, including art, education, entertainment, healthcare, scientific research, and product development. This includes the development of new tools and methods for training, education, rehabilitation, scientific exploration, and product design that can leverage the power of multi-sensory experiences.
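The sensory substitution idea in the list above can be made concrete with a small sketch that renders one column of a grayscale image as sound, in the spirit of vision-to-audio devices such as The vOICe: vertical position maps to pitch and pixel brightness to loudness. The image, frequency range, and sample rate here are arbitrary choices for illustration.

```python
import numpy as np

SAMPLE_RATE = 8000
DURATION = 0.25  # seconds of audio per image column

def column_to_audio(column, f_lo=200.0, f_hi=2000.0):
    """Sum one sine wave per pixel: higher pixels map to higher frequencies,
    brighter pixels to louder partials. `column` holds values in [0, 1]."""
    t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
    freqs = np.linspace(f_lo, f_hi, len(column))
    signal = np.zeros_like(t)
    for brightness, f in zip(column, freqs):
        signal += brightness * np.sin(2 * np.pi * f * t)
    # Normalize to [-1, 1] so the snippet could be written to an audio file.
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# A single bright pixel in the middle of an 8-pixel column
column = np.zeros(8)
column[4] = 1.0
audio = column_to_audio(column)
print(audio.shape)
```

Sweeping such columns across an image yields a time-varying soundscape that a trained listener can learn to interpret, which is the core mechanism of this class of devices.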
Examples of Virtual Reality (VR) and AI-powered multi-sensory applications:
● Digital commensality: Design of multimodal technologies for eating together.
● AI-powered virtual tasting experiences: AI algorithms can analyze the chemical composition of food and beverages and create virtual environments that enhance their taste and aroma, for instance by exploiting crossmodal correspondences between the chemical senses and visual, sonic, and tactile features.
● Haptic interfaces for virtual experiences: Haptic feedback devices can simulate the sensations of touch, allowing users to interact with virtual objects and environments in a more tactile way. This technology can be used to create more immersive and engaging experiences.
● Design of multisensory listening environments in VR based on crossmodal correspondences between music and the other senses.
● Using VR for sensory analysis.
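The crossmodal correspondences invoked in the list above can be sketched as a toy mapping from basic-taste intensities to visual and sonic display parameters. The directions of the mappings (sweetness with round shapes and higher pitch, bitterness with angular shapes and lower pitch) paraphrase commonly reported correspondences, but the specific numbers and function are invented for illustration.

```python
def taste_to_display(sweet, bitter):
    """sweet, bitter in [0, 1]; returns hypothetical rendering parameters."""
    # Shape roundness: sweetness pulls toward round (1), bitterness toward angular (0).
    roundness = max(0.0, min(1.0, 0.5 + 0.5 * (sweet - bitter)))
    # Pitch in Hz: sweetness raises pitch, bitterness lowers it,
    # spanning roughly one octave either side of A4 (440 Hz).
    pitch_hz = 440.0 * 2 ** (sweet - bitter)
    return {"roundness": roundness, "pitch_hz": pitch_hz}

r_sweet = taste_to_display(sweet=0.9, bitter=0.1)   # rounder shape, higher pitch
r_bitter = taste_to_display(sweet=0.1, bitter=0.9)  # more angular shape, lower pitch
print(r_sweet, r_bitter)
```

A VR tasting environment could drive its visual and auditory rendering from such a mapping, with the parameters learned from perceptual data rather than hand-set as here.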
Keywords:
Multisensory, Experiences, Environment, Applications, AI
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.