Today, extended reality technologies are becoming increasingly accessible and therefore a more plausible area of investigation for creating sonic artefacts within immersive environments. Music composition and audio systems benefit from such immersive 3D environments because their mixed spatial features open up new avenues for musical expression, compositional thinking and human interaction with computing systems. These include challenging the boundaries between physical and virtual instruments, spatialising sound, investigating new methods for sonic interactivity, using machine learning/AI to create human-led interactions with mixed spatial media, and using the extended reality space to create accessible formats for enjoying audiovisual content. With successful enquiry, new methods for music composition, artistic immersion, data auralisation/sonic insight (regarding data sonification) and accessing artistic content become possible within extended reality.
The goal of this Research Topic is to investigate how new compositional and interactive audio systems can be designed and built within extended reality (including accessibility enquiries) to inform new methods for artistic practice in this emerging medium. This is a challenging area because, given the novelty of the space, there is little literature or artistic output applying artistic practice to extended reality tools. It is also challenging because artistic communities have traditionally lacked the technical knowledge to pair artistic practice with the affordances of ML/AI. Recent advances in this field include processing human biometrics with AI towards advanced sonic-centred HCI and the development of smart music tutors.
This Research Topic aims to address all applications of interactive audio under the umbrella of extended reality, including the creation of artistic products, for example:
- Interactive music composition systems (artefacts and demonstrations)
- Gestural interfaces for sonic-centric human-computer interaction (development of hardware & applications)
- Audio-visual interaction systems for educational outcomes (e.g., data visualisation and accessibility)
- Internet of Things (IoT)-centred interaction within immersive environments
- Machine learning and AI within immersive environments
The Research Topic Coordinator for this Research Topic is Dr Chris Rhodes. Chris is primarily a Lecturer in Digital Media Production at University College London, UK. He also holds a Research Fellowship (EPSRC Doctoral Prize) at the University of Manchester, UK, investigating AI and music performance. Chris’ research interests concern the use of interactive and immersive technologies towards future creative music systems.