MINI REVIEW article

Front. Educ., 01 January 2026

Sec. Special Educational Needs

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1715885

This article is part of the Research Topic "Inclusion in Non-formal Education Places for Children and Adults with Disabilities, Vol. II."

Inclusive education beyond formal settings: AR and VR as accessibility strategies for deaf and hard-of-hearing individuals

  • 1Department of Didactics and Educational Research, University of La Laguna, San Cristóbal de La Laguna, Spain
  • 2Department of Specific Didactics, University of La Laguna, San Cristóbal de La Laguna, Spain

This scoping review synthesizes the evidence on how augmented reality (AR) and virtual reality (VR) enhance accessibility and participation for deaf and hard-of-hearing individuals in non-formal educational settings (e.g., museums, libraries, heritage sites and community programs). Following the PCC and PRISMA-ScR frameworks, studies published between 2015 and 2025 in English, Spanish, Portuguese or French were searched in Scopus, Web of Science and PubMed. After peer screening, 9 manuscripts were analysed (from 246 unique records; 27 full texts reviewed). The most frequent contexts included libraries (orientation and anxiety reduction), cultural experiences with immersive subtitling, technical training using AR and sign language, and autonomous/community-based learning. Visual accessibility features predominated (subtitles, sign language support, 3D visualizations), with generally positive but exploratory evidence (small samples, short-term studies). We conclude that AR/VR have clear potential, but more robust designs, standardization (e.g., subtitles/avatars), co-design with deaf communities and evaluations of sustainability and scalability are needed.

1 Introduction

The inclusion of deaf and hard-of-hearing individuals in non-formal education requires identifying and eliminating communication barriers. From the perspective of the social model of disability, disadvantage arises from the interaction with environments not designed for diversity (Oliver, 1990; Shakespeare, 2006). Consistently, both the International Classification of Functioning, Disability and Health (ICF) and the Web Content Accessibility Guidelines (WCAG) of the World Wide Web Consortium (W3C) position accessibility as a structural condition for participation (WHO, 2001; World Wide Web Consortium (W3C), 2023). In the field of education, Universal Design for Learning (UDL) proposes multiple means of representation, action/expression, and engagement, including for sign language users, to reduce the need for ex post adaptations (CAST, 2018; Meyer et al., 2014), while WCAG calls for perceivable, operable, and understandable solutions from the outset.

Within this framework, VR/AR provide affordances that are particularly relevant for deaf and hard-of-hearing users: (i) enriched visual channels (overlays, immersive subtitles, 360° signage); (ii) integration of sign language (video interpreters, avatars, recognition/production using gloves/IMUs/computer vision); (iii) multimodal interaction (haptics, eye-tracking for selection or pictographic dialogue); and (iv) safe training and spatial orientation prior to real-world contexts (libraries, museums, workplaces). Recent evidence suggests improvements in visual comprehension, engagement and specific learning outcomes (e.g., vocabulary and procedures), though mostly based on small samples, exploratory designs, and a lack of longitudinal evaluation—highlighting the need to examine the reach and sustained effects of these tools (Borna et al., 2024; Fernandes et al., 2024).

In non-formal education, documented applications include: library orientation and anxiety reduction via VR and accessible supports (Ariya et al., 2024; Intawong et al., 2025); cultural/museum experiences with immersive subtitling and display preferences in VR/360° (Agulló and Matamala, 2019; Brescia-Zapata et al., 2025; Oncins et al., 2020); workplace training using AR and sign language support (Oral and Kalkan, 2025); autonomous and community-based learning of sign language and literacy via AR/VR (Al-Megren and Almutairi, 2018; Economou et al., 2020; Novaliendry et al., 2023); and accessible communication using pictograms and eye-tracking in critical situations (Wółk, 2019). Technically, progress has been made in gesture recognition (static/continuous) using gloves and inertial sensors (Achenbach et al., 2023; Halabi and Harkouss, 2025; Li et al., 2023) and in avatar-based synthesis, although these still require studies on comprehensibility, naturalness and acceptability by deaf and hard-of-hearing users to enable their routine educational adoption (De Martino et al., 2017).

Challenges remain, including fragmentation across platforms, functionalities and heterogeneous objectives; lack of standardization (e.g., immersive subtitle conventions, avatar parameters); limited participation of the deaf community in co-design and evaluation; and scarce evidence on sustainability and scalability (costs, maintenance, mediator training). Reviews of AR/VR in education reinforce the need to map environments, accessibility functions and outcomes (access, participation, learning, satisfaction), as well as methodological gaps to guide a cumulative research agenda (Fernandes et al., 2024; Radianti et al., 2020; Wu et al., 2020). This context underpins the scoping review presented here, which focuses on non-formal education and immersive solutions aimed at removing barriers for deaf and hard-of-hearing individuals.

General Objective. To map the use of VR/AR for accessibility and inclusion of deaf and hard-of-hearing individuals in non-formal education (2015–2025).

Research Questions:

RQ1: In which non-formal settings is VR/AR used, and for what purposes?

RQ2: What accessibility features are integrated (e.g., subtitles, live sign language/avatars, visualizations, haptics)?

RQ3: What outcomes are reported (access, participation, learning, engagement, satisfaction)?

RQ4: What barriers, enablers and gaps are identified (technological, pedagogical, ethical, cost/sustainability)?

2 Materials and methods

2.1 Design

This study was conducted as a scoping review, following the methodological framework proposed by Arksey and O’Malley (2005), later expanded by Levac et al. (2010), and reported in accordance with the PRISMA-ScR guidelines (Tricco et al., 2018). Given the exploratory nature of the topic and the heterogeneity of studies and technological prototypes, this design was deemed most appropriate. The objective was to map the existing evidence on the use of virtual and augmented reality (VR/AR) to enhance accessibility for deaf and hard-of-hearing individuals in non-formal education settings, to identify the main accessibility features (immersive subtitling, sign language, multimodal interaction) and to highlight research and practice gaps. We define non-formal contexts in functional terms: voluntary activities that are not tied to a curriculum or to formal instruction. Under this criterion, spaces located within formal institutions (e.g., libraries or university laboratories) may be included, provided that deaf and hard-of-hearing individuals interact with AR/VR as users of a service or resource, rather than as part of a formal teaching activity. Curricular interventions (classes, practical sessions, assessable activities) were excluded, even when they took place in the same spaces.
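The functional eligibility rule described above can be sketched as a simple predicate. This is an illustrative reconstruction, not the authors' actual screening instrument; all field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Minimal representation of a screened record (illustrative fields only)."""
    uses_ar_vr: bool            # AR or VR is the technology under study
    involves_dhh_users: bool    # deaf or hard-of-hearing participants
    voluntary_activity: bool    # participation is voluntary, not assessed
    curricular: bool            # classes, practical sessions, assessable activities

def eligible(r: Record) -> bool:
    # Non-formal in functional terms: voluntary and not tied to a curriculum,
    # even when the venue sits inside a formal institution (e.g., a library).
    return (r.uses_ar_vr and r.involves_dhh_users
            and r.voluntary_activity and not r.curricular)

# e.g., a VR library orientation service is in; a curricular AR class is out.
print(eligible(Record(True, True, True, False)))   # True
print(eligible(Record(True, True, False, True)))   # False
```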

2.2 Inclusion and exclusion criteria

See Tables 1 and 2.


Table 1. Inclusion and exclusion criteria.


Table 2. Truncated terms and search equation used for each database.

2.3 Study selection procedure

See Figure 1.

Figure 1
Flowchart illustrating the selection process of studies. Identification: 253 records from databases; 246 after duplicates removed. Screening: 246 records screened; 219 excluded. Eligibility: 27 full-text articles assessed; 18 excluded based on criteria. Included: 9 studies for qualitative synthesis.

Figure 1. Flow diagram of the study selection process; reasons for exclusion are detailed at each stage.

2.4 Inter-rater agreement analysis

To ensure consistency in the selection process, two independent reviewers screened and assessed the studies in parallel based on the predefined criteria (Figure 1; Table 1). After removing duplicates, 246 unique records were identified; 27 were reviewed in full text and, after resolving a single discrepancy by consensus, 9 studies were included in the qualitative synthesis (see Tables 2, 3). The observed agreement rate was 96.3%. To estimate agreement beyond chance, the Perreault and Leigh coefficient was calculated:

I = [(F − 1)/F] × [1 − Σ p_j(1 − p_j) / (N(k − 1)/k)].
Table 3. Analysed documents.

I = Reliability coefficient.

F = Number of judges or evaluators.

p_j = Proportion of judges who assigned a response to category j.

N = Number of items or units assessed.

k = Number of possible response categories.
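For illustration, the observed agreement reported above (26 of 27 matching full-text decisions, i.e., 96.3%) and the coefficient as printed can be computed directly. The `perreault_leigh_index` function below is a literal transcription of the formula as given in this section, with the category proportions p_j supplied by the caller; the example values are hypothetical, since the paper does not report the raw rating table:

```python
def observed_agreement(matches: int, total: int) -> float:
    """Proportion of screening decisions on which both reviewers agreed."""
    return matches / total

def perreault_leigh_index(p: list[float], F: int, N: int, k: int) -> float:
    """I = [(F - 1)/F] * [1 - sum(p_j * (1 - p_j)) / (N * (k - 1) / k)],
    transcribed verbatim from the formula in Section 2.4."""
    dispersion = sum(pj * (1.0 - pj) for pj in p)
    return ((F - 1) / F) * (1.0 - dispersion / (N * (k - 1) / k))

# F = 2 reviewers, k = 2 categories (include/exclude), N = 27 full texts;
# a single discrepancy was resolved by consensus, so 26/27 decisions matched.
print(round(observed_agreement(26, 27), 3))  # 0.963, i.e. 96.3%
```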

3 Results

Based on the established criteria, nine studies published between 2018 and 2025 were included. These studies focused on the use of VR and AR in non-formal educational contexts involving deaf or hard-of-hearing individuals. The findings are presented below, organized according to the previously stated research questions.


3.1 Non-formal settings and purposes of use (RQ1)

The analysed studies reveal a wide variety of non-formal settings for AR/VR experiences. In the cultural and recreational domain, applications included museums, accessible digital narratives, and immersive audiovisual experiences designed to improve accessibility (Agulló and Matamala, 2019). In libraries, virtual orientation systems were implemented to facilitate service use and reduce first-visit anxiety (Ariya et al., 2024; Intawong et al., 2025). In non-formal workplace training, technical instruction in the metallurgical industry was delivered via mobile AR using 3D models and sign language (Oral and Kalkan, 2025). Additionally, applications were documented for autonomous learning at home and in community settings—such as early literacy and sign language learning (Al-Megren and Almutairi, 2018; Novaliendry et al., 2023)—along with proposals in religious and cultural education (Quran learning through AR; Ahmad Yusoff et al., 2025) and literature reviews across different disciplines (Fernandes et al., 2024). Taken together, the objectives encompassed literacy, cultural access, spatial orientation, technical training, and inclusive communication, confirming the versatility of these technologies beyond formal education.

3.2 Integrated accessibility functions (RQ2)

The reviewed applications incorporated multiple accessibility functions. Customizable, immersive subtitles (size, color, position) were identified as the primary access channel in audiovisual environments (Agulló and Matamala, 2019). Sign language appeared in several forms: interpreters in VR (Ariya et al., 2024), specialized technical videos (Oral and Kalkan, 2025), 3D animations for vocabulary/literacy (Novaliendry et al., 2023) and proposals for integration into digital materials (Al-Megren and Almutairi, 2018). Three-dimensional models and visual resources were also included for technical objects or religious symbols (Oral and Kalkan, 2025; Ahmad Yusoff et al., 2025), along with haptics, customizable audio and adapted navigation (Ariya et al., 2024; Intawong et al., 2025). Nevertheless, the absence of automatic subtitling, sign language avatars and integrated multimodal solutions was noted.
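The customization options listed above can be summarized as a small settings structure. This is a hypothetical sketch of the kind of user-adjustable access preferences the reviewed applications expose; every field name and default is illustrative and not drawn from any cited system:

```python
from dataclasses import dataclass

@dataclass
class AccessPreferences:
    """Illustrative user-side access settings for an immersive application."""
    subtitle_size_pt: int = 24        # customizable size (cf. Agulló & Matamala, 2019)
    subtitle_color: str = "#FFFF00"   # high-contrast default
    subtitle_position: str = "bottom" # e.g., "bottom", "top", "head-locked" in 360° video
    sign_language: str = "none"       # "interpreter-video", "3d-animation", or "none"
    haptics_enabled: bool = False     # vibration cues for events outside the field of view

    def describe(self) -> str:
        return (f"{self.subtitle_size_pt}pt {self.subtitle_color} subtitles "
                f"({self.subtitle_position}), sign language: {self.sign_language}")

prefs = AccessPreferences(sign_language="interpreter-video", haptics_enabled=True)
print(prefs.describe())
```

Grouping these options in one profile mirrors the review's observation that subtitles, sign language support and haptics are currently offered piecemeal rather than as an integrated multimodal solution.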

3.3 Reported outcomes (RQ3)

Overall, the studies agree that AR/VR facilitates the comprehension of visual content, spatial orientation, and sign language learning. Participation was generally high in immersive/interactive experiences, except for technological developments that lacked empirical testing (Rum and Boilis, 2021). In terms of learning, improvements were reported in vocabulary, literacy and technical knowledge within workplace and non-formal settings (Fernandes et al., 2024; Oral and Kalkan, 2025). Engagement was reported as high due to realism and visual appeal, although visual fatigue and cognitive overload were noted with prolonged subtitle use (Agulló and Matamala, 2019). Finally, satisfaction was positive, with emphasis on the usefulness, attractiveness and ease of use of the applications.

3.4 Barriers, enablers, and identified gaps (RQ4)

Among the enablers, noteworthy factors included interdisciplinary collaboration in prototype design (Ariya et al., 2024), families’ familiarity with mobile devices (Al-Megren and Almutairi, 2018), and 3D visualization of complex objects and contexts, which enhanced technical and cultural understanding (Oral and Kalkan, 2025). Regarding barriers, technical issues were identified (cybersickness, poorly optimized controls), as well as the high cost of AR devices, lack of standards for immersive subtitling, and limited availability of content in local sign languages. The most relevant research gaps include the absence of longitudinal studies on sustained impact, the lack of comparisons between accessibility features (e.g., subtitles vs. avatars), limited exploration of costs and sustainability, and weak integration with existing digital services in libraries, museums, or community programs.

4 Discussion

The reviewed studies confirm that libraries, museums and workplace/community contexts are suitable settings for applying AR/VR with inclusive purposes, reinforcing the innovative potential of non-formal education highlighted by OECD (2020). However, most experiences remain prototypes or pilot projects, consistent with Akçayır and Akçayır (2017), and thus, sustained impact has not yet been demonstrated. Functionally, immersive subtitles, sign language supports, and 3D visualizations have been explored. However, the lack of standardized and scalable solutions persists: the integration of subtitles and sign language remains fragmented (Chemnad and Othman, 2024), and emerging technologies such as automatic avatars (Kipp et al., 2011) have scarcely been transferred to AR/VR in non-formal education. Conceptually, inclusion should be evidenced as perceivable, operable, understandable, and participatory access.

The results suggest improvements in comprehension, motivation and specific learning outcomes, though based on limited and short-term evidence. Previous reviews had already noted small sample sizes and exploratory designs that limit generalizability, particularly in deaf populations (Radianti et al., 2020; Wu et al., 2020). Beyond technical or cost-related barriers, the limited participation of deaf communities in design processes stands out, despite strong evidence of the need for co-design to achieve effective inclusion (Kusters et al., 2017; Young and Temple, 2014). Questions remain regarding sustainability, transferability to real contexts (museums/libraries), and the evaluation of ethical impacts.

Methodologically, future studies should adopt a core outcome set (access, participation, learning, engagement, satisfaction), disaggregate results by deaf and hard-of-hearing profiles, and test co-designed solutions in real venues.

In essence, AR/VR play a promising role in enhancing accessibility in non-formal education. However, more robust evidence, standardization of best practices, and active involvement of the deaf community throughout the design and implementation cycle are needed.

Author contributions

MG-A: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Visualization, Writing – original draft, Writing – review & editing. CP-L: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing. ZP-C: Conceptualization, Methodology, Resources, Supervision, Validation, Visualization, Writing – review & editing. DP-J: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing.

Funding

The authors declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

One or more of the authors were editorial board members of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.

Generative AI statement

The authors declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Achenbach, P., Laux, S., Purdack, D., Müller, P. N., and Göbel, S. (2023). Give me a sign: using data gloves for static hand-shape recognition. Sensors 23:9847. doi: 10.3390/s23249847

Agulló, B., and Matamala, A. (2019). Subtitling for the deaf and hard-of-hearing in immersive environments: results from a focus group. J. Spec. Transl. 32, 217–235. Available online at: https://jostrans.soap2.ch/issue32/art_agullo.pdf

Ahmad Yusoff, N. F., Abdul Wahab, A. H., Abdul Rahman, A., Noh, N. M., and Sulaiman, N. A. (2025). Fuzzy delphi method: designing a Tadabbur Al-Quran model in Arabic vocabulary learning for hearing-impaired Muslim adults and assisted with augmented reality technology. Int. J. Learn. Teach. Educ. Res. 24, 250–268. doi: 10.26803/ijlter.24.5.13

Akçayır, M., and Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: a systematic review of the literature. Educ. Res. Rev. 20, 1–11. doi: 10.1016/j.edurev.2016.11.002

Al-Megren, S., and Almutairi, A. (2018). Analysis of user requirements for a mobile augmented reality application to support literacy development amongst hearing-impaired children. J. Inf. Commun. Technol. 18, 97–121. doi: 10.32890/jict2019.18.1.6

Ariya, P., Yensathit, Y., Thongthip, P., Intawong, K., and Puritat, K. (2024). Assisting hearing and physically impaired students in navigating immersive virtual reality for library orientation. Technologies 13:2. doi: 10.3390/technologies13010002

Arksey, H., and O’Malley, L. (2005). Scoping studies: towards a methodological framework. Int. J. Soc. Res. Methodol. 8, 19–32. doi: 10.1080/1364557032000119616

Borna, A., Mousavi, S. Z., Fathollahzadeh, F., Nazeri, A., and Harari, R. E. (2024). Applications of augmented and virtual reality in enhancing communication for individuals who are hard of hearing: a systematic review. Am. J. Audiol. 33, 1378–1394. doi: 10.1044/2024_AJA-24-00056

Brescia-Zapata, M., Krejtz, K., Duchowski, A. T., Hughes, C. J., and Orero, P. (2025). Subtitles in VR 360° video: results from an eye-tracking experiment. Perspectives 33, 357–379. doi: 10.1080/0907676X.2023.2268122

CAST (2018). Universal Design for Learning (UDL) guidelines version 2.2. Available online at: https://udlguidelines.cast.org/ (Accessed August 16, 2025).

Chemnad, K., and Othman, A. (2024). Digital accessibility in the era of artificial intelligence: bibliometric analysis and systematic review. Front. Artif. Intell. 7:1349668. doi: 10.3389/frai.2024.1349668

De Martino, J. M., Silva, I. R., Bolognini, C. Z., Costa, P. D. P., Kumada, K. M. O., Coradine, L. C., et al. (2017). Signing avatars: making education more inclusive. Univ. Access Inf. Soc. 16, 793–808. doi: 10.1007/s10209-016-0504-x

Economou, D., Russi, M. G., Doumanis, I., Mentzelopoulos, M., Bouki, V., and Ferguson, J. (2020). Using serious games for learning British sign language combining video, enhanced interactivity, and VR technology. JUCS 26, 996–1016. doi: 10.3897/jucs.2020.053

Fernandes, N., Leite Junior, A. J. M., Marçal, E., and Viana, W. (2024). Augmented reality in education for people who are deaf or hard of hearing: a systematic literature review. Univ. Access Inf. Soc. 23, 1483–1502. doi: 10.1007/s10209-023-00994-z

Halabi, M., and Harkouss, Y. (2025). Real-time Arabic sign language recognition system using sensory glove and machine learning. Neural Comput. Appl. 37, 6977–6993. doi: 10.1007/s00521-025-11010-1

Intawong, K., Khanchai, S., Thongthip, P., Yensathit, Y., and Puritat, K. (2025). Inclusive library services through virtual reality: enhancing access and reducing anxiety for hearing- and physically-impaired students. J. Inf. Sci. doi: 10.1177/09610006251335023

Kipp, M., Heloir, A., and Nguyen, Q. (2011). "Sign language avatars: animation and comprehensibility," in Intelligent virtual agents. IVA 2011. Lecture notes in computer science, vol. 6895, eds. H. H. Vilhjálmsson, S. Kopp, S. Marsella, and K. R. Thórisson (Berlin, Heidelberg: Springer). doi: 10.1007/978-3-642-23974-8_13

Kusters, A., De Meulder, M., and O’Brien, D. (2017). Innovations in deaf studies: Critically mapping the field. Oxford: Oxford University Press.

Levac, D., Colquhoun, H., and O’Brien, K. K. (2010). Scoping studies: advancing the methodology. Implement. Sci. 5:69. doi: 10.1186/1748-5908-5-69

Li, J., Huang, L., Shah, S., Jones, S. J., Jin, Y., Wang, D., et al. (2023). Signring: continuous American sign language recognition using IMU rings and virtual IMU data. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7, 1–29. doi: 10.1145/3610881

Meyer, A., Rose, D. H., and Gordon, D. (2014). Universal Design for Learning: Theory and practice. Wakefield, MA: CAST Publishing.

Novaliendry, D., Budayawan, K., Auvi, R., Fajri, B. R., and Huda, Y. (2023). Design of sign language learning media based on virtual reality. Int. J. Online Biomed. Eng. 19, 111–126. doi: 10.3991/ijoe.v19i16.44671

OECD (2020). Curriculum analysis of the OECD future of education and skills 2030. Paris: OECD Publishing.

Oliver, M. (1990). The politics of disablement. New York: Macmillan.

Oncins, E., Bernabé, R., Montagud, M., and Arnáiz-Uzquiza, V. (2020). Accessible scenic arts and virtual reality: a pilot study with aged people about user preferences when reading subtitles in immersive environments. MonTI 12, 214–241. doi: 10.6035/MonTI.2020.12.07

Oral, A. Z., and Kalkan, Ö. K. (2025). Deaf and hard of hearing employees: accessible and AR-based training materials. Univ. Access Inf. Soc. 24, 1331–1340. doi: 10.1007/s10209-024-01143-w

Radianti, J., Majchrzak, T. A., Fromm, J., and Wohlgenannt, I. (2020). A systematic review of immersive virtual reality applications for higher education: design elements, lessons learned, and research agenda. Comput. Educ. 147:103778. doi: 10.1016/j.compedu.2019.103778

Rum, S. N., and Boilis, B. I. (2021). Sign language communication through augmented reality and speech recognition (LEARNSIGN). Int. J. Eng. Trends Technol. 69, 125–130. doi: 10.14445/22315381/IJETT-V69I4P218

Shakespeare, T. (2006). Disability rights and wrongs. London: Routledge.

Tricco, A. C., Lillie, E., Zarin, W., O’Brien, K. K., Colquhoun, H., Levac, D., et al. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann. Intern. Med. 169, 467–473. doi: 10.7326/M18-0850

World Wide Web Consortium (W3C) (2023). Web content accessibility guidelines (WCAG) 2.2. Available online at: https://www.w3.org/TR/WCAG22/

WHO (2001). International classification of functioning, disability and health (ICF). Geneva: World Health Organization.

Wółk, K. (2019). Emergency, pictogram-based augmented reality medical communicator prototype using precise eye-tracking technology. Cyberpsychol. Behav. Soc. Netw. 22, 151–157. doi: 10.1089/cyber.2018.0035

Wu, B., Yu, X., and Gu, X. (2020). Effectiveness of immersive virtual reality using head-mounted displays on learning performance: a meta-analysis. Br. J. Educ. Technol. 51, 1991–2005. doi: 10.1111/bjet.13023

Young, A., and Temple, B. (2014). Approaches to social research: the case of deaf studies. Oxford: Oxford University Press.

Keywords: deaf and hard of hearing, non-formal education, accessibility, inclusion, augmented reality (AR), virtual reality (VR), sign language, immersive learning

Citation: González-Afonso MC, Perdomo-López CA, Plasencia-Carballo Z and Pérez-Jorge D (2026) Inclusive education beyond formal settings: AR and VR as accessibility strategies for deaf and hard-of-hearing individuals. Front. Educ. 10:1715885. doi: 10.3389/feduc.2025.1715885

Received: 29 September 2025; Revised: 31 October 2025; Accepted: 12 November 2025;
Published: 01 January 2026.

Edited by:

Nevine Nizar Zakaria, Julius Maximilian University of Würzburg, Germany

Reviewed by:

Azadeh Borna, Iran University of Medical Sciences, Iran

Copyright © 2026 González-Afonso, Perdomo-López, Plasencia-Carballo and Pérez-Jorge. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: David Pérez-Jorge, dpjorge@ull.edu.es
