- 1Mechatronics Department, School of Engineering and Sciences, Tecnologico de Monterrey, Monterrey, Mexico
- 2School of Humanities and Education, Tecnologico de Monterrey, Monterrey, Mexico
The use of immersive technologies in education has been shown to improve students' learning. However, applications of these technologies in the Humanities are limited, since most studies focus on scientific fields. In this study, the Neurohumanities Lab was introduced as a semi-immersive space for teaching the Humanities. Two groups of 12 participants each performed activities in the semi-immersive and traditional classroom set-ups while their physiological signals (electroencephalography, electrodermal activity, and heart rate) were recorded. The ITC-SOPI presence questionnaire was used to compare perceived presence between the two groups, showing a higher level in the experimental group. Machine learning algorithms were applied; a decision tree supervised learning model identified the most relevant features for distinguishing between the two set-ups with an accuracy of 90%. The experimental group showed an increased heart rate with respect to the control group, while electrodermal activity peaks increased in both groups relative to the basal state. Additionally, brain source localization techniques revealed a marked activation of brain areas related to emotional and somatosensory processing during the semi-immersive experience. Therefore, the Neurohumanities Lab has the potential to become a fully immersive environment for innovative education and enhanced learning.
1 Introduction
The Humanities encompass all aspects of human society and culture, including language, literature, philosophy, law, politics, religion, art, history, and social psychology. They offer insights into human evolution and learning (Carew and Ramaswami, 2020). Educational innovation has predominantly focused on science and engineering, where technological advancements have demonstrably enhanced learning outcomes (Bond et al., 2020). Conversely, the arts and Humanities have experienced comparatively little empirical investigation into novel pedagogical approaches (Hutson and Olsen, 2021). Nevertheless, the Humanities foster critical thinking, empathy, and creativity, competencies that are crucial for innovation across all domains (Hutson and Olsen, 2021). Therefore, it is important to improve pedagogical methods to engage students more effectively in disciplines related not only to the sciences but also to the Humanities.
The incorporation of technology in education has the potential to enhance learning and student engagement, although careful planning and appropriate tools are crucial (Bond et al., 2020; Pradeep et al., 2024). Commonly used educational technology tools include online learning platforms, discussion boards, knowledge organization tools, videos, digital games, and blogs (Bond et al., 2020; Bedenlier et al., 2020; Pradeep et al., 2024). Virtual reality and immersive learning spaces are other innovative educational technologies that show promise for designing engaging environments that could enhance the learning experience (Collins et al., 2019; Liu et al., 2024). There has been great interest in studying the influence an environment or tool has on a person's cognitive state, especially in education (Gramouseni et al., 2023).
Neuroeducation has gathered interest in recent years, as it seeks to understand how the brain operates in learning environments (Jolles and Jolles, 2021). As the crossover between education and neuroscience, this field could inform better decision-making about learning environments and improve learning outcomes and pedagogical methods (Jolles and Jolles, 2021; Pradeep et al., 2024). Tools developed to evaluate educational technologies have been based mainly on Likert-scaled questionnaires, which still deliver subjective information. The most widely used questionnaires for this evaluation are closely related to presence, an indicator of a person's engagement, learning, and emotional reaction during the learning process (Ochs and Sonderegger, 2022). Literature reviews have studied presence as an important factor in understanding immersion and the effectiveness of virtual reality environments (Moinnereau et al., 2022; Grassini and Laumann, 2020). However, most studies do not evaluate presence and virtual environments directly in educational settings, which limits their assessment and continuous improvement. Moreover, a quantitative measurement of the impact of virtual and immersive environments in educational set-ups may improve educational innovation applications.
Neurotechnologies such as functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and electroencephalography (EEG) have greatly contributed to a better understanding of how the brain operates in learning environments (Pradeep et al., 2024). EEG is one of the most common signals used to obtain important information on neural activity, and it has been shown to reflect cognitive performance and emotional functions (Gramouseni et al., 2023; Cardona-Álvarez et al., 2023). Electrodermal activity (EDA) and blood volume pulse (BVP) are other physiological signals that can provide important information on a subject's physiological functions and can be used as a measurement of a subject's mental state (Cosoli et al., 2021; Khedher et al., 2019). EDA has been considered a measure of the sympathetic nervous system, a branch of the autonomic nervous system, as it captures variations in skin conductance (Cosoli et al., 2021; Goussain et al., 2025). EDA can be decomposed into tonic and phasic components, representing the slow and fast variations of EDA, respectively (Veeranki et al., 2024). The phasic component is represented by the skin conductance response (SCR) and is commonly studied due to its correlation with cognitive activity (Moinnereau et al., 2022; Rahma et al., 2022). BVP measures changes in blood volume and has been identified as a tool for assessing engagement in educational environments (Moinnereau et al., 2022; Goussain et al., 2025). By understanding the subject's mental state, learning systems can be adjusted to enhance learning outcomes.
The biometrics mentioned can reveal patterns that correlate with the sense of presence and engagement. EEG signals are most commonly studied in five frequency bands: delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (>30 Hz) (Gramouseni et al., 2023). Some studies have assessed specific functions of the cognitive state in virtual reality based on EEG. For instance, a study analyzing a driving simulator found a higher cognitive load and a higher number of activated brain regions in virtual reality environments (Qadir et al., 2019). Another study involving storytelling in virtual reality provided evidence that participants sustained high levels of immersion and engagement, noting an increase in alpha and theta band power (Škola et al., 2020). Studies relating EEG measurements to presence in immersive environments show a correlation with low power in all frequency bands in frontal electrodes (Moinnereau et al., 2022). Similarly, a high engagement index (obtained from EEG) indicates a higher immersive perception; therefore, this neuromarker is a potential indicator of optimized learning in virtual environments (Ochs and Sonderegger, 2022; Moinnereau et al., 2022).
Specifically in the field of the Humanities, new initiatives such as the digital Humanities and the Neurohumanities aim to create a significant and meaningful impact on the way these topics are taught today (Carew and Ramaswami, 2020). The interdisciplinary approach of neuroscience and the Humanities is important, since it may enable linguistics, music, and emotional analysis to become key components of modern cognitive neuroscience (Hartley and Poeppel, 2020). By combining virtual environments, the digital Humanities, and the Neurohumanities, universities worldwide have implemented these ideas in subjects such as Art History and History of Architecture (Hutson and Olsen, 2021). However, these efforts are still in the development stage, and questions have arisen about how to measure the real impact and benefit of these technologies on learning outcomes (Hutson and Olsen, 2021). Humanities studies are subjective: learning is not quantitatively measurable, and answers are not simply correct or incorrect as in subjects related to the exact sciences. Assessing learning outcomes in the Humanities therefore becomes a challenge.
These open questions, together with the combination of neuroscience and the Humanities, led to the Neurohumanities Lab (NHLab) project, which proposes a design for a partially immersive interactive system for education in the Humanities. The system can detect movements, voice, facial gestures, emotions, and mental states of the user through cameras and wearable biometric devices. The emotional state of individuals was identified and analyzed with EEG, EDA, BVP, and temperature signals, along with computer vision techniques. The detected signals were used as a feedback loop to modify the participants' environment, including dynamic visualizations, lighting, and sound, allowing an enhanced environment-participant interaction (Blanco-Rios et al., 2024).
This approach provides individuals with the opportunity to interact with their environment and enhance their learning experience within a classroom setting. The underlying assumption is that integrating these technologies can improve the learning experience in terms of engagement, personalization, multi-sensory stimulation, and overall effectiveness. The aim of the present study is to further evaluate the effectiveness of the NHLab design by analyzing the sense of presence and engagement shown by two groups of higher education students while completing four different activities, one group in a controlled traditional classroom set-up and the other in the partially immersive environment. Additionally, we aim to determine which EEG features can differentiate between the experimental and control groups, and whether EDA signals and heart rate variability can indicate which group experienced higher presence levels.
2 Methodology
2.1 Participants
The methodology for this research builds upon previous work (Romo-De León et al., 2024), in which the data collection process is described. The dataset comprises a total of 24 Spanish-speaking adults between 18 and 25 years of age who were not undergoing medical treatment for any mental condition, which served as the inclusion criteria. All participants provided informed consent, confirming the approval of this study by the institution's ethics committee, the “Comité Institucional de Ética en Investigación (CIEI) del Instituto Tecnológico y de Estudios Superiores de Monterrey,” under the identification code EHE-2023-03.
2.2 Experimental protocol
Participants were divided into two groups: 12 participants in the control group and 12 participants in the experimental group. The objective of the experimental design was for both groups to complete four similar tasks, adapted to their respective set-ups: a traditional classroom and the NHLab. Tasks in both set-ups were based on a humanistic topic related to the book The Passions of the Soul by Descartes, an influential philosopher who contributed to the early conceptualization of emotions as they are understood today. Participants selected one of six Passions described by Descartes to represent and keep in mind throughout the tasks: admiration, love, hate, desire, joy, or sadness (Descartes, 2010).
A total of four questionnaires were completed by participants. Two were administered at the beginning of the experiment: the General Health Questionnaire (GHQ) and the Trait Meta-Mood Scale (TMMS-24) for emotional intelligence. The Self-Assessment Manikin (SAM) questionnaire was completed between each task, and the ITC-SOPI Presence questionnaire was administered at the end of the four tasks (Goldberg and Hillier, 1979; Bradley and Lang, 1994; Lessiter et al., 2001; Salovey et al., 1995). Biometric signal recording was conducted throughout the four tasks in both set-ups. Participants wore the OpenBCI Ultracortex IV helmet with an eight-electrode configuration (Fp1, Fp2, C3, C4, P7, P8, O1, O2), following the 10-20 International System for EEG recordings (OpenBCI, 2023). Additionally, an Empatica E4 wristband was used to record blood volume pulse (BVP), interbeat interval (IBI), electrodermal activity (EDA), and temperature (Empatica, n.d.). The overall methodology is illustrated in Figure 1.
Figure 1. Methodology overview of the experimental protocol used for both groups. Diagram shows the process from start (top left) to end (bottom right).
2.2.1 Scene description
During the experiment, the tasks were referred to as scenes, and each scene consisted of a 3-minute task. The experimental group was introduced to the NHLab, whereas the control group remained in a controlled classroom set-up consisting of a chair and a desk.
First scene. In the first scene (experimental group), participants were asked to portray their chosen emotion through corporal expression, which could include moving their arms and legs or walking within the NHLab experimental space. Participants were able to see their movements represented on a screen as digital brushstrokes of different colors. Using the physiological data collected, the system predicted the participant's emotional state in real time and accordingly changed the color of the brush (Blanco-Rios et al., 2024). The detected emotion also triggered changes in various aspects of the environment, including lighting color, sound, and visual elements projected on the screen. An image of the described environment is shown in Figure 2.
Figure 2. Picture of a participant of the experimental group interacting with NHLab immersive environment representing “Desire” during the first scene.
In the control group, participants were asked to portray the selected emotion through an abstract drawing. They were provided with paper, colored markers, and colored pencils, and were instructed to draw the chosen passion as they perceived and experienced it. Figure 3 presents an example of an abstract drawing created by one participant to represent “joy.” In contrast to the experimental group, this group did not receive any feedback from the system, as the environment was configured to resemble a standard classroom. Nevertheless, data from the prediction model were recorded and stored across all scenes.
Figure 3. Abstract drawing representing “Joy” made by a control group participant during the first scene.
Second scene. In the second scene (experimental group), a word cloud associated with the selected passion was projected within the NHLab environment, as shown in Figure 4. Participants were asked to verbally state, using a microphone, the words that best represented their chosen passion. They were instructed to select words from the displayed word cloud or to propose new words, which were subsequently detected by the system and incorporated into the cloud.
Figure 4. Word selection screen from the passion “sadness”, displayed to a participant during the second scene of the experimental group. Words in the screen read “terror”, “fear”, “horror”, “request”, and “sympathy”.
For the control group, participants were provided with a printed word cloud associated with their chosen passion and were asked to encircle or write below the words they associated with the emotion as experienced during the first scene. Alternatively, they were instructed to add any missing words they felt better represented that emotion. It should be noted that these sheets were identical for all participants for each emotion and did not vary across individuals, unlike in the experimental group. An example of the activity sheet is shown in Figure 5.
Figure 5. Word Selection associated with “Admiration” made by a participant in the control group during the second scene. The words encircled were “sympathy”, “delight”, “amazement” and “solemnity”, and the ones written down were “conservation”, “chaining”, “admiration”, “similarity” and “aspiration”.
Third scene. In the third scene (experimental group), literary fragments were projected onto the screen. These short fragments, consisting of one or two sentences, were related to the words selected by the participant during the second scene. Participants were asked to read the fragments aloud into a microphone in the most emphatic and comprehensible manner possible. Figure 6 illustrates how the NHLab environment changed to display the literary fragments on the screen.
Figure 6. Literary fragment reading from a participant in the experimental group during the third scene.
For the control group, participants were provided with printed literary fragments, as shown in Figure 7. They were asked to read the fragments aloud and highlight the words that captured their attention. As in the previous scene, these sheets were identical for each emotion and did not vary across participants.
Figure 7. Literary fragment reading with the passion “Desire” from a participant in the control group during the third scene. Fragments included texts from David Hume and Michel de Montaigne. Some of the highlighted words were “the vision”, “pleasant”, “change”, “appetite”, “aversion”, “hope” and “inclination”.
Fourth scene. During the final scene, participants in the experimental group were seated while a camera captured and recognized their facial features. A digital Vanitas painting was projected onto the screen, and participants were asked to observe it and explore the emotions it elicited through facial interaction. The sand clock and the flower within the painting changed in response to the detected facial expressions, as illustrated in Figure 8. Moreover, the skull in the center was designed to replicate the participants' head, eye, and mouth movements.
Figure 8. Participant in the experimental group reacting facially to the digital painting, during the fourth scene.
In contrast, participants in the control group were presented with a printed version of the painting and were asked to write a description of the feelings they experienced while observing it. Figure 9 shows an example of a description written by one participant in the control group. The same painting was used for all participants in both groups.
Figure 9. Descriptions from a painting from a participant in the control group. These translate to “anguish”, “pressure”, “it's something beautiful and deep that involves life's time”, “ephemeral”, “fear”, “life and death”, “light and darkness” and “connection”.
The purposes of the scenes follow a constructivist Kantian framework of cognitive functions, ranging from physical processes (intuition) to conceptual processes (understanding), discursive processes (reason), and judgment (imagination combined with reason) (Danielyan, 2023). Within this framework, each scene incorporates specific measurements that complement the biometric analysis conducted. These include observing the quantity and variety of movements proposed in the first scene; analyzing the emotional charge, number, and diversity of the words suggested in the second scene; examining tone of voice, as well as variations in speed and intensity, during the readings in the third scene; and assessing the diversity of emotions elicited, both from EEG recordings and facial emotion recognition, and comparing these sequences in the fourth scene.
2.3 Data analysis
2.3.1 Data pre-processing
The EEG signals acquired from the OpenBCI cap, along with the EDA and BVP signals recorded using the Empatica E4 wristband, were used for further analysis and comparison between groups.
EEG signals were collected using custom Python scripts at a sampling rate of 250 Hz. Both earlobes were used as reference points, and data were recorded from eight channels–Fp1, Fp2, C3, C4, P7, P8, O1, and O2–following the standard 10–20 International System. The data were preprocessed as described in Romo-De León et al. (2024). During preprocessing, the MATLAB PREP Pipeline plug-in was employed, which detrends the signal, removes line noise without relying on a fixed filtering method, and re-references the signal relative to an estimate of the “true” average reference. Specifically, line noise at 60 Hz was removed using a CleanLine approach with a scan bandwidth of ± 2 Hz around the target frequency and a taper window size of 4 seconds with a 1-second step. All channels were included in both the referencing and noise removal procedures (Bigdely-Shamlo et al., 2015; The MathWorks, n.d.b).
Following this step, a 4th-order Butterworth 0.1–50 Hz band-pass filter was applied to remove baseline drifts while preserving relevant neural frequency components. Subsequently, two artifact removal methods were applied: Artifact Subspace Reconstruction (ASR) and Independent Component Analysis (ICA). ASR was performed using a burst detection threshold of 15 standard deviations, while flatline, correlation, and line noise thresholds were disabled. Bad channels were iteratively interpolated until no channels were classified as bad (Chang et al., 2018). ICA was then conducted using the runica algorithm. Because only eight EEG channels were used, Principal Component Analysis (PCA) was applied to reduce the data to eight dimensions prior to ICA, ensuring compatibility with the dataset size (Debener et al., 2010). Components exhibiting artifact-like spatial, temporal, and spectral characteristics, and identified as artifacts by the ICLABEL plug-in, were removed. The overall preprocessing procedure is illustrated in Figure 10.
Figure 10. EEG pre-processing steps (Romo-De León et al., 2024).
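The filtering step above was run in MATLAB; as an illustrative sketch only (not the study's code), the 4th-order Butterworth 0.1–50 Hz band-pass can be reproduced in Python with SciPy. Zero-phase filtering is assumed here, since the text does not state whether one-pass or zero-phase filtering was used.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # EEG sampling rate (Hz), as reported above

def bandpass_eeg(eeg, low=0.1, high=50.0, order=4, fs=FS):
    """Zero-phase 4th-order Butterworth band-pass (applied along the last axis)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)

# Toy check: a 10 Hz alpha-band tone should pass, a 60 Hz line component should not
t = np.arange(0, 4, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)
clean = bandpass_eeg(signal)
```

In practice the same call would be applied channel-wise to the 8 × N EEG array after the PREP Pipeline stage.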
For EDA data analysis, the cvxEDA MATLAB application was used (Greco et al., 2016a). This application decomposes the signal into tonic and phasic components, as well as an additive noise term, using a convex optimization approach (Greco et al., 2016a). As this method explicitly accounts for noise and artifacts, no additional preprocessing steps were applied. The BVP signal was retained for further analysis in its raw form.
2.3.2 EEG data analysis
After preprocessing, the cleaned EEG signals were used for further analysis using machine learning (ML) algorithms (Alzubi et al., 2018). One of the primary research questions was to determine which differences in participants' EEG signals could be distinguished when comparing the two learning experiences: the traditional classroom set-up and the partially immersive set-up. Accordingly, feature extraction and ML techniques were employed to identify which features most effectively supported the discrimination between these two conditions (Moinnereau et al., 2022; Grassini and Laumann, 2020).
Feature matrices were built to evaluate classification accuracy of multiple ML models. Each feature matrix comprised EEG frequency-domain features, including total power and power spectral density (PSD) across all frequency bands—delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and gamma (30–100 Hz)—for all eight electrodes, as well as the engagement index ratio derived from the frontal electrodes (Fp1 and Fp2), as shown in Equation 1.
These calculations were performed on a per-second basis for each recording, participant, and scene. Several review studies have reported the use of frequency-domain EEG analysis to distinguish and characterize immersive and non-immersive technological experiences (Moinnereau et al., 2022; Grassini and Laumann, 2020). Custom MATLAB code was developed to compute the features and construct a feature matrix (160 × 83; seconds per scene × features and target vector) containing the estimated values. Band power was calculated using MATLAB's bandpower function (The MathWorks, n.d.a), and power spectral density (PSD) was estimated using Welch's method via the pwelch function (The MathWorks, n.d.d). Both band power and PSD have been associated with presence and engagement, as they may increase or decrease depending on the level of immersion (Moinnereau et al., 2022; Grassini and Laumann, 2020).
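The per-second feature computation can be sketched in Python (the study used MATLAB's bandpower and pwelch; Equation 1 is not reproduced in this excerpt, so the widely used beta/(alpha + theta) definition of the engagement index is assumed here and should be checked against the original):

```python
import numpy as np
from scipy.signal import welch

FS = 250
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 100)}  # Hz, as listed above

def band_powers(x, fs=FS):
    """Per-band power from a Welch PSD (analogous to pwelch + bandpower)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)  # 1 s segments -> 1 Hz resolution
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

def engagement_index(x, fs=FS):
    """Assumed definition: beta / (alpha + theta), per frontal electrode."""
    p = band_powers(x, fs)
    return p["beta"] / (p["alpha"] + p["theta"])

# An alpha-dominant (10 Hz) trace should yield a low engagement index
t = np.arange(0, 8, 1 / FS)
alpha_like = np.sin(2 * np.pi * 10 * t)
```

Repeating these two functions per second, channel, and band yields the columns of the feature matrices described above.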
The minimum redundancy maximum relevance (MRMR) algorithm was applied to identify the most informative predictors from the feature matrices (The MathWorks, n.d.c). A separate feature matrix was constructed for each scene to enable comparison of EEG features between the two groups. For each feature matrix, data from 70% of the participants were used for training, while data from the remaining 30% were used for testing. Three ML algorithms were evaluated: decision tree, linear discriminant analysis, and quadratic discriminant analysis. Five-fold cross-validation was performed across all three models to prevent overestimation, with participants randomly selected at each iteration. Figure 11 presents a visual flowchart of the procedure used to identify the best-performing ML model.
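A minimal sketch of this pipeline on synthetic data follows. MRMR (MATLAB's fscmrmr) has no direct scikit-learn equivalent, so a mutual-information ranking stands in here as an illustrative substitute; the matrix shape and the informative features are invented.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for one scene's feature matrix: rows = seconds,
# columns = band powers / PSDs / engagement index; labels = set-up (0/1)
X = rng.normal(size=(160, 82))
y = rng.integers(0, 2, size=160)
X[y == 1, :3] += 1.5  # make the first three features informative

# Rank features and keep the top 10 (mutual information as an MRMR stand-in)
scores = mutual_info_classif(X, y, random_state=0)
top10 = np.argsort(scores)[::-1][:10]

# 70/30 split, decision tree, and 5-fold cross-validation
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, top10], y, test_size=0.3, random_state=0, stratify=y)
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
test_acc = tree.score(X_te, y_te)
cv_acc = cross_val_score(tree, X[:, top10], y, cv=5).mean()
```

Note that the study split by participants rather than by rows; this sketch splits rows only for brevity.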
Pre-processed EEG signals were analyzed using dipole fitting to identify dipole representations of brain sources based on the ICA method (EEGLAB, n.d.b). Subsequently, neighboring dipoles were grouped into clusters; the resulting centroids were then located and classified within specific Brodmann Areas (Guy-Evans, n.d.). This analysis was performed using the DIPFIT plugin for MATLAB (EEGLAB, n.d.a).
To determine the optimal number of clusters, we applied an optimization approach suggesting between 5 and 15 clusters, based on clustering-validity criteria (Calinski-Harabasz and Silhouette). Dipoles were grouped by experimental condition, yielding distinct clusters for the Control and Experimental groups for each scene. The dipole fitting workflow is illustrated in Figure 12.
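The cluster-number search can be sketched as follows on toy coordinates (the study searched 5 to 15 clusters on DIPFIT output; the synthetic three-group data and the smaller k range below are illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, silhouette_score

rng = np.random.default_rng(1)
# Toy "dipole" coordinates: three tight groups in MNI-like space (mm)
centers = np.array([[0.0, 40.0, 20.0], [-40.0, -60.0, 30.0], [40.0, -60.0, 30.0]])
dipoles = np.vstack([c + rng.normal(scale=3.0, size=(30, 3)) for c in centers])

def score_k_range(points, k_range):
    """Silhouette and Calinski-Harabasz scores for each candidate k."""
    out = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
        out[k] = (silhouette_score(points, labels),
                  calinski_harabasz_score(points, labels))
    return out

scores = score_k_range(dipoles, range(2, 8))
best_k = max(scores, key=lambda k: scores[k][0])  # choose k by silhouette
```

Both criteria agree on the correct cluster count when groups are well separated, which is the rationale for combining them in the study.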
2.3.3 EDA data analysis
The cvxEDA model was applied to the raw EDA signals to extract the phasic component, utilizing default parameters for all subjects (Greco et al., 2016a). From this component, the SCR count and phasic amplitude were extracted as primary features. In accordance with established protocols, a minimum threshold of 0.05 μS was set to identify significant SCRs (Posada-Quintero and Chon, 2020; Greco et al., 2016b). This procedure was applied to all subjects across all scenes, including the baseline. Following extraction, the mean SCR count and amplitude were calculated for both the Control and Experimental groups.
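Assuming the phasic component has already been extracted by cvxEDA, the SCR count and mean amplitude under the 0.05 μS threshold can be sketched as follows (the toy phasic trace is invented for illustration):

```python
import numpy as np
from scipy.signal import find_peaks

FS_EDA = 4  # Empatica E4 EDA sampling rate (Hz)
SCR_THRESHOLD = 0.05  # minimum SCR amplitude (microsiemens), per the protocol

def scr_features(phasic):
    """SCR count above threshold and their mean amplitude (0.0 if none)."""
    peaks, props = find_peaks(phasic, height=SCR_THRESHOLD)
    amps = props["peak_heights"]
    return len(peaks), float(amps.mean()) if len(peaks) else 0.0

# Toy phasic trace: two clear SCRs plus a sub-threshold slow fluctuation
t = np.arange(0, 30, 1 / FS_EDA)
phasic = (0.3 * np.exp(-((t - 8.0) ** 2) / 2)
          + 0.2 * np.exp(-((t - 20.0) ** 2) / 2)
          + 0.02 * np.sin(2 * np.pi * 0.05 * t))
count, mean_amp = scr_features(phasic)
```

Averaging these two features across subjects per group reproduces the group-level comparison described above.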
2.3.4 BVP data analysis
Heart rate (HR) was derived from the BVP signals for each subject. Using MATLAB's findpeaks command, we identified the signal's local maxima. To prevent heart rate calculation errors caused by the dicrotic notch, only the secondary peak was considered for each identified maximum (Peper et al., 2007). The time intervals between successive peaks were calculated and converted to an estimated heart rate by dividing 60 s by each interval.
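A minimal Python sketch of the HR estimation follows (the study used MATLAB's findpeaks; here an assumed 0.4 s refractory distance between peaks stands in for the paper's dicrotic-notch handling, and 60 s is divided by each interbeat interval):

```python
import numpy as np
from scipy.signal import find_peaks

FS_BVP = 64  # Empatica E4 BVP sampling rate (Hz)

def heart_rate_bpm(bvp, fs=FS_BVP, min_rr=0.4):
    """Beat-to-beat HR: 60 s divided by each interbeat interval."""
    # A minimum peak distance (assumed) rejects dicrotic-notch peaks
    peaks, _ = find_peaks(bvp, distance=int(min_rr * fs))
    ibi = np.diff(peaks) / fs  # interbeat intervals in seconds
    return 60.0 / ibi          # instantaneous HR in beats per minute

# Toy pulse wave at 1.25 Hz, i.e. 75 beats per minute
t = np.arange(0, 10, 1 / FS_BVP)
bvp = np.sin(2 * np.pi * 1.25 * t)
hr = heart_rate_bpm(bvp)
```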
3 Results and discussion
3.1 EEG analysis
3.1.1 Machine learning
The MRMR algorithm was employed to rank the feature matrices and identify those that most effectively distinguished between the control and experimental classes. Table 1 presents the top 10 features identified for each scene. Across all scenes, the engagement level of one or both electrodes consistently emerged as a key differentiator. This aligns with existing literature suggesting that engagement is a critical factor for quantifying immersion and correlates strongly with the level of presence (Bond et al., 2020; Debener et al., 2010). Furthermore, because the engagement feature was ranked as highly relevant by the algorithm, it may serve as a potential neuromarker for evaluating the knowledge or understanding acquired by each group. High levels of engagement may indirectly indicate superior learning outcomes (Ochs and Sonderegger, 2022).
The engagement index values were averaged across scenes and groups, and are illustrated in Figure 13. In Scene 1, engagement values exhibited a sharp contrast, with the experimental group demonstrating significantly higher engagement. Conversely, during Scenes 2 and 3, the control group showed higher engagement levels, suggesting that these specific activities may have been more engaging within a traditional classroom setting. Finally, in Scene 4, the data again revealed higher engagement within the experimental group.
Machine learning (ML) was employed to develop models capable of identifying the most relevant biometric patterns that distinguish between the control and immersive groups. The accuracy levels for each scene were evaluated using three supervised methods: Decision Tree, Linear Discriminant Analysis, and Quadratic Discriminant Analysis. A comparison of these models across scenes is illustrated in Figure 14. The y-axis depicts the average accuracy of each model (across cross-validations) relative to the increasing number of features on the x-axis.
Figure 14. Machine Learning Models performance across all four scenes. Model accuracy was evaluated considering an increasing amount of features, using five cross-validations per model. Thick line and shaded area represent mean and standard deviation per model accuracy respectively.
Accuracy is reported for both testing and training datasets; colored, thick lines represent the mean accuracy, while the shaded areas indicate the standard deviation across cross-validations. From the 10-feature threshold onward, the Decision Tree model demonstrates the highest performance, reaching nearly 90% accuracy and maintaining consistent results as the number of features increases.
In Table 2, the maximum accuracy achieved and the corresponding number of features can be observed for each scene and each Machine Learning model.
Table 3 presents the average accuracy for each model using a 10-feature set. This threshold was selected because the Decision Tree model's average accuracy stabilizes at approximately 90% beyond this point. In contrast, the Linear and Quadratic Discriminant Analysis methods show consistently lower accuracy values, even when utilizing the same 10 features.
3.1.2 Dipole fitting
Dipole fitting enables the 3D localization of independent components (brain sources) within specific regions of the cerebral cortex. In this analysis, subjects were categorized into control and experimental groups across four distinct scenes. This allowed for the identification of active brain sources for both the control and immersive groups throughout Scenes 1 to 4. Only independent components with DIPFIT residual variance below 20% were retained for source localization and clustering analyses.
Following the clustering of these dipoles, the resulting clusters were mapped to the Montreal Neurological Institute (MNI) template to identify their corresponding Brodmann Areas (BAs). Table 4 summarizes the BAs activated during each scene for both the control and experimental groups (Akalin Acar and Makeig, 2013).
In Scene 1, three clusters were identified in both the control and experimental groups. The first cluster was located in BA 10 (Anterior prefrontal cortex), an area involved in higher cognitive functions such as planning and task management. This activity likely reflects the cognitive demands of creating an abstract painting, which requires a degree of strategic planning. Additionally, clusters were identified in BA 18 and BA 19 [Secondary Visual Cortex (V2) and Associative Visual Cortex (V3, V4 & V5)], which are associated with receiving and analyzing complex visual information. These findings directly correlate with the visual tasks performed by both groups. Notably, the experimental group also exhibited activation in BA 7 (Somatosensory Association Cortex). This activation is associated with processing sensory stimuli and may reflect the multi-sensory experience of the immersive space, as participants visualize themselves within the projection. These clusters are visualized in Figure 15.
Figure 15. Brodmann Areas found during the dipole fitting and clustering analysis for Scene 1 (experimental and control groups). Colored boxes show unique BA activations in one particular group.
In Scene 2, observations were similar to Scene 1, with clusters identified in BAs 10 and 18 for both groups. However, the experimental group uniquely exhibited clusters in BA 9 (Dorsolateral prefrontal cortex) and BA 40 (Supramarginal gyrus). These areas are associated with executive functions—including memory and attention—as well as phonological processing and emotional responses. This activation likely reflects the cognitive demands of recalling and articulating words related to Descartes' passions. In contrast, the control group showed a cluster in BA 37 (Fusiform gyrus), which is linked to high-level visual processing such as word recognition. This may correspond to the participants' interaction with the physical word list and the associated reading comprehension tasks. The dipole clustering for Scene 2 is illustrated in Figure 16.
Figure 16. Brodmann Areas found during the dipole fitting and clustering analysis for Scene 2 (experimental and control groups). Colored boxes show unique BA activations in one particular group.
Similar to the previous scenes, Scene 3 exhibited clusters of brain activity in BAs 10, 18, and 19 for both groups (Figure 17). In the experimental group, activation was uniquely observed in BA 1 (Primary somatosensory cortex), which is responsible for processing somatic sensations. This activation may be linked to the subjects observing recordings of their own movements from Scene 1, potentially eliciting a higher level of proprioception. Conversely, the control group showed a cluster in BA 6 (Premotor and Supplementary Motor Cortex), an area associated with movement planning and control. This activation likely reflects the physical task requirements in the control condition, where participants manually selected literary quotes, whereas the experimental group read the quotes statically as they were projected.
Figure 17. Brodmann Areas found during the dipole fitting and clustering analysis for Scene 3 (experimental and control groups). Colored boxes show unique BA activations in one particular group.
Finally, in Scene 4 (Figure 18), clusters were identified in both groups within BA 6, BA 10, and BA 18 (the premotor cortex, anterior prefrontal cortex, and secondary visual cortex, respectively). Notably, the control group uniquely exhibited a cluster in BA 19 (Associative Visual Cortex). This area is responsible for processing complex visual information and likely reflects the cognitive demands associated with interpreting the specific painting presented to the control group in this scene.
Figure 18. Brodmann Areas found during the dipole fitting and clustering analysis for Scene 4 (experimental and control groups). Colored boxes show unique BA activations in one particular group.
3.2 EDA analysis
The average SCR for both groups during each scene is illustrated in Figure 19. Notably, two EDA recordings from the experimental group were lost due to data collection errors: subject 13 during Scene 1 and subject 23 during Scene 2. During the baseline, most subjects exhibited no SCRs, with the exception of four individuals. In the experimental group, two subjects registered a total of 13 SCRs, while in the control group, two subjects registered 1 and 4 SCRs, respectively. The low variability of EDA signals in a relaxed state likely explains the increased SCR frequency during experimental scenes compared to the baseline (Cosoli et al., 2021). Furthermore, higher SCR counts have been associated with high-level presence in specific environments (Moinnereau et al., 2022). Overall, a higher SCR count was observed in the experimental group, although both groups exhibited considerable standard deviation. The most prominent increases occurred during Scenes 1 and 3. In Scene 1, the control group averaged 9.5 ± 8.91 SCRs, while the experimental group averaged 12.75 ± 12.71. In Scene 3, the control group averaged 5.92 ± 7.76 compared to 11.25 ± 11.11 in the experimental group. The average phasic amplitude is presented in Figure 20. Notably, the experimental group exhibited a larger average amplitude than the control group across all scenes. The highest average phasic amplitude was recorded in the experimental group during Scene 1 (0.50 ± 0.89 μS). In contrast, the control group's largest average amplitude occurred during Scene 3 (0.11 ± 0.23 μS).
Figure 19. Average number of EDA SCRs during the basal state and each scene, separated into control (left) and experimental (right) groups.
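SCR counting of this kind can be sketched as peak detection on the phasic EDA component. The following scipy example is not the authors' processing pipeline: the signal is synthetic, and the 4 Hz sampling rate, 0.01 μS amplitude threshold, and 2 s minimum inter-peak distance are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 4.0  # Hz, a common sampling rate for wrist-worn EDA devices (assumption)
t = np.arange(0, 60, 1 / fs)

# Synthetic phasic EDA: three SCR-like bumps over a near-flat baseline
phasic = sum(0.4 * np.exp(-((t - c) ** 2) / 4.0) for c in (10, 25, 47))
phasic += 0.005 * np.sin(2 * np.pi * 0.05 * t)  # slow sub-threshold drift

# Count SCRs as phasic peaks above the amplitude threshold, at least 2 s apart
peaks, _ = find_peaks(phasic, height=0.01, distance=int(2 * fs))
n_scr = len(peaks)                 # SCR count per recording segment
mean_amp = phasic[peaks].mean()    # average phasic amplitude at the peaks
```

Averaging `n_scr` and `mean_amp` per scene and group yields the quantities plotted in Figures 19 and 20.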
3.3 HR analysis
As illustrated in Figure 21, the heart rate increased considerably more in the control group than in the experimental group. Existing literature suggests that a lower heart rate is associated with a greater sense of presence among participants (Moinnereau et al., 2022). While the heart rate did not drop below baseline values at any point, the relative difference between the two groups provides a basis for interpreting the results. The smaller increase in heart rate observed in the experimental group during Scenes 1, 2, and 4 suggests these participants experienced a higher level of presence than their counterparts in the control group (Grassini and Laumann, 2020). Specifically, tasks involving the expression of Descartes' passions, verbalizing passion-related words, and viewing paintings with facial recognition elicited a greater sense of presence than the control activities of abstract drawing, writing words, and manual painting analysis.
During Scene 3, participants in the control group demonstrated a higher level of presence than those in the experimental group. This suggests that reading literary quotes aloud and actively selecting words related to Descartes' passions elicited a greater sense of presence than the experimental condition, where participants read the quotes without a selection task. The average BVP amplitudes for each condition are illustrated in Figure 22.
3.4 Questionnaire statistical results
The ITC-SOPI questionnaire was employed to collect qualitative data regarding the participants' sense of presence within the virtual environment. This instrument evaluates four distinct dimensions to compare various aspects of the user experience (Lessiter et al., 2001). Figure 23 presents the average scores obtained across each of these four categories.
Figure 23. Box plot of statistical results from ITC-SOPI Presence Questionnaire. Likert scale was used (1 = strongly disagree, 5 = strongly agree).
Data normality for the ITC-SOPI dimensions was assessed using the Ryan-Joiner test in Minitab, which is particularly suitable for small sample sizes. The results indicated that all four dimensions were approximately normally distributed: Negative Effects (RJ = 0.991), Spatial Presence (RJ = 0.967), Engagement (RJ = 0.988), and Ecological Naturalness (RJ = 0.991). Based on these findings, parametric analyses were deemed appropriate. A one-way ANOVA was subsequently performed using a significance level of α = 0.05 to determine if significant differences existed between the groups. Table 5 presents the resulting p-values and eta-squared (η2) values, which measure the effect size and indicate the proportion of total variance attributable to the group effect. As defined by Becker (1999), η2 is calculated by dividing the effect variance by the total variance. The analysis revealed a significant effect of group on spatial presence, [F(1, 22) = 7.63, p = 0.011], indicating that spatial presence differed significantly between the two conditions.
Table 5. Comparison of p-level and effect sizes with eta squared of the different categories of the ITC-SOPI questionnaire.
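The reported statistics can be reproduced in outline with scipy: a one-way ANOVA via `f_oneway`, with η² computed as SS_between/SS_total per Becker (1999). The Likert scores below are hypothetical placeholders for two groups of 12, not the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical spatial-presence scores (Likert means) for the two groups
control = np.array([2.9, 3.1, 3.0, 2.7, 3.3, 2.8, 3.0, 3.2, 2.6, 3.1, 2.9, 3.0])
immersive = np.array([3.4, 3.6, 3.3, 3.8, 3.5, 3.2, 3.7, 3.6, 3.4, 3.9, 3.3, 3.5])

# One-way ANOVA at alpha = 0.05
f_stat, p_value = f_oneway(control, immersive)

# Eta squared = SS_between / SS_total (Becker, 1999)
pooled = np.concatenate([control, immersive])
grand = pooled.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in (control, immersive))
ss_total = ((pooled - grand) ** 2).sum()
eta_sq = ss_between / ss_total
```

With two groups, η² reduces to F/(F + df_within), which offers a quick consistency check on the computation.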
Analysis of the ITC-SOPI categories revealed no significant differences between the two groups regarding engagement and negative effects. However, significant differences were observed in spatial presence and ecological validity. Specifically, spatial presence was significantly higher in the immersive group, whereas ecological validity was significantly lower compared to the control group. Notably, when considering effect sizes, ecological validity exhibited a stronger effect than spatial presence. According to the literature, significantly higher spatial presence scores indicate a higher quality of immersion within the experimental set-up, while higher scores in the ecological validity (naturalness) category reflect how realistically the experiment was perceived and how closely it resembles real-world experiences (Grassini and Laumann, 2020). These results suggest that while the experimental group experienced a higher quality of immersion and spatial presence, the lower ecological validity scores indicate the experience was perceived as innovative and unique rather than a direct simulation of everyday reality.
3.5 Content results
In addition to neurophysiological measures, several strategies were implemented to evaluate educational impact beyond presence alone. During the second scene of both experiences, the words proposed by each participant in relation to their experience and understanding of each studied passion were collected. The number of unique words suggested, along with their correlation with a reference corpus, serves as an indicator of the participants' vocabulary richness. This comparison is presented in Figure 24, where the number of words suggested by participants in the control group is contrasted with those in the immersive group. A greater quantity and variety of proposed words were observed in the immersive group compared to the control group. Furthermore, these words were more closely aligned with the specific passions chosen in each case; that is, the immersive group proposed words with a higher correlation to the selected passion.
Figure 24. Proposed words (in Spanish) by selected passion. The size of each label reflects the correlation of that word with the corresponding passion within the corpus; the more opaque words were suggested by the experiment, while the more transparent words were proposed by the participants.
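A minimal sketch of the unique-word counting described above follows. The Spanish word lists are hypothetical examples, and a simple type-token ratio stands in for the corpus-correlation measure actually used:

```python
from collections import Counter

# Hypothetical word lists proposed for one passion in each condition
control_words = ["amor", "alegría", "tristeza", "amor", "miedo"]
immersive_words = ["amor", "deseo", "admiración", "gozo", "tristeza",
                   "asombro", "deseo", "odio"]

def richness(words):
    """Unique-word count and type-token ratio as simple diversity measures."""
    counts = Counter(words)
    return len(counts), len(counts) / len(words)

n_ctrl, ttr_ctrl = richness(control_words)
n_imm, ttr_imm = richness(immersive_words)
```

Comparing these per-group counts is the same comparison Figure 24 presents graphically.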
During Scene 4, the participants' reactions to a Vanitas artwork were analyzed. The number and variety of detected facial emotions served as indicators of the capacity to process diverse aesthetic reactions to the same painting. A greater diversity of captured emotions suggests an enhanced ability to appreciate and respond to complex artistic stimuli. Compared with the control group, the immersive group experienced a higher variety of emotions, with a mean of 5.5 emotions per participant versus 2.5 in the control group. In the immersive group, the most frequently detected emotion was neutral, followed by fear, sadness, happiness, anger, surprise, and disgust. Similarly, in the control group, neutral was the most expressed emotion, followed by fear, surprise, happiness, anger, disgust, and sadness.
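The emotion-diversity metric reduces to averaging the number of distinct emotions detected per participant. A minimal Python sketch, with hypothetical per-participant emotion sets chosen only to reproduce the reported means of 5.5 and 2.5:

```python
from statistics import mean

# Hypothetical per-participant sets of facial emotions detected in Scene 4
immersive = [{"neutral", "fear", "sadness", "happiness", "anger"},
             {"neutral", "fear", "surprise", "sadness", "happiness", "disgust"}]
control = [{"neutral", "fear"},
           {"neutral", "surprise", "happiness"}]

# Mean number of distinct emotions per participant in each group
mean_imm = mean(len(s) for s in immersive)
mean_ctrl = mean(len(s) for s in control)
```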
4 Limitations
Several limitations of this study should be acknowledged. First, the relatively small sample size may limit the generalizability of the findings, particularly across different age groups. It is noted, however, that previous research has established heart rate and EEG signals as reliable indicators of engagement in virtual environments using a sample of 21 participants (Murphy and Higgins, 2019). Similarly, a significant positive impact of engagement on student outcomes was identified using a sample of 15 participants (Khedher et al., 2019). While these sample sizes are comparable to the one employed in the present study, a larger and more diverse participant pool is necessary for future research to further validate these findings. A significant drawback involved the loss of data from the Empatica E4 device, which necessitated the exclusion of three recordings from the experimental group. In future studies, strategies should be incorporated to minimize such data loss. Furthermore, while the experiment relied heavily on quantitative physiological signals, qualitative data were obtained via questionnaires, which may be susceptible to individual biases. Additionally, participants interacted with the environment only once, preventing conclusions regarding the long-term stability of the observed results. Further experiments involving repeated use of the environment would be beneficial to determine whether results remain consistent over continued exposure, particularly for therapeutic applications. A final limitation concerns the estimation of brain activations via dipole modeling, as the number of electrodes in the headset was relatively small (eight). Readers should note that reducing the number of electrodes decreases the accuracy of source localization estimates (Soler et al., 2020; Bai and He, 2005). To compensate for this, data from participants were combined within their respective groups to enhance the accuracy of the estimations during cluster localization.
5 Conclusions
The obtained results suggest that physiological signals such as EEG, EDA, and HR may provide a quantitative means of measuring the presence and engagement of a student within a partially immersive space. Increased emotional activation, presence, and engagement were associated with participants in the partially immersive environment, which indicates that it could potentially serve as a technological tool to improve the learning experience and understanding of the Humanities. However, the field of study is broad, and limitations remain that must be taken into account in future analyses.
Regarding the EEG analysis, the features computed for the ML models support the idea that presence and engagement can be important factors in understanding the physiological changes of students across different educational environments. For instance, the ratio of beta power to the sum of alpha and beta power, related to engagement, and the behavior of the gamma band in the frontal electrodes were among the features ranked with maximum relevance for classification. The selection of these features is in accordance with previous reports in the literature, which corroborates the cognitive processes present during the proposed experiences.
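The engagement ratio described here can be computed from Welch power spectral density estimates, analogous to the MATLAB pwelch/bandpower workflow the authors cite. A scipy sketch on a synthetic one-channel signal; the 250 Hz sampling rate and band edges are assumptions for illustration:

```python
import numpy as np
from scipy.signal import welch

fs = 250.0  # Hz (assumed EEG sampling rate)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)

# Synthetic one-channel EEG: alpha (10 Hz) plus a stronger beta (20 Hz) rhythm
eeg = (0.5 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)
       + 0.1 * rng.standard_normal(t.size))

def band_power(x, fs, lo, hi):
    """Integrate Welch's PSD estimate over a frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum() * (f[1] - f[0])

alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 13, 30)
engagement = beta / (alpha + beta)  # engagement-related ratio from the text
```

Computed per electrode and per scene, ratios like this form the feature set ranked for the classifiers above.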
Furthermore, the high accuracy of the obtained ML models (close to 90%) suggests a notable difference between the brain parameters present during the control and immersive experiences.
The dipole fitting analysis showed similar brain activations in both groups, specifically in the anterior prefrontal cortex (BA 10) and in areas responsible for visual processing (BAs 18 and 19), consistent with the activities carried out during the experience. However, particular activations were found for specific scenes and groups. For instance, brain areas related to somatosensory processing (BAs 1 and 7) were activated in Scenes 1 and 3 in the experimental group. This activation may indicate the efficiency of the multisensory feedback provided in the partially immersive experience, reflecting its impact on the participants' reception of sensory information and stimuli. Similarly, a cluster was observed in BA 40 during Scene 2 in the experimental group, an area related to emotional perception and processing. These results reflect the higher sensory and emotional content of the immersive experience compared with the more traditional one.
Regarding the HR analysis, less variability was observed in the experimental group between the basal state and the scenes, and the results from this group suggest a higher sense of presence. In the EDA analysis, the number of SCR peaks increased in both groups compared with their respective basal states. The average number of SCR peaks was very similar for both groups, although it was slightly higher in the control group. It is important to note that the EDA signal varies greatly between subjects; hence, there is considerable variation within both groups, as shown by the high standard deviations. For further and more extensive research, it could be helpful to study the variation of EDA for each subject.
The results from the analysis of the collected signals contribute preliminary evidence toward a better understanding of teaching in the Humanities and of the effectiveness of immersive technologies in learning. In the future, this first designed space is expected to be scaled to a fully immersive version. Another interesting possibility is the implementation of the emotion detection algorithm in medical-clinical applications, for instance as an assistive therapy tool for patients with mental disorders or reduced emotional intelligence.
Nevertheless, ethical considerations for further uses, such as full immersion and therapeutic applications, must be taken into account. Although the study was conducted according to the Institution's ethics committee requirements, including informed consent, the data obtained from further studies would be sensitive and could be at risk of manipulation, changing the main purpose of the project. Biometric data such as EEG signals, heart rate, movements, and even detected emotions are personal; hence, anonymization is essential when publishing these data. Furthermore, if these data are used for medical applications such as therapy, they become even more sensitive, since disorders and diagnoses must be linked to the patient or user along with their recorded data. Misinterpretation or alteration of these signals could eventually be harmful to patients. For this reason, biometric data in further studies regarding diagnosis and therapy must follow stricter regulations, such as encryption and controlled access.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://doi.org/10.6084/m9.figshare.24777084.
Ethics statement
The studies involving humans were approved by Comité Institucional de Ética en Investigación (CIEI), Instituto Tecnológico y de Estudios Superiores de Monterrey. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
RR-D: Data curation, Writing – original draft, Methodology, Formal analysis, Investigation, Resources, Validation. MC-P: Methodology, Writing – original draft, Conceptualization, Investigation, Validation, Formal analysis, Data curation, Resources. VE-V: Resources, Formal analysis, Writing – original draft, Data curation, Methodology, Investigation, Validation. AV-V: Formal analysis, Methodology, Writing – original draft, Data curation, Resources, Investigation, Validation. AO-E: Software, Writing – review & editing, Validation, Methodology. CV-S: Project administration, Validation, Formal analysis, Writing – review & editing, Conceptualization, Methodology, Investigation. JL-S: Conceptualization, Project administration, Supervision, Writing – review & editing, Resources, Funding acquisition. MC-L: Conceptualization, Methodology, Investigation, Writing – review & editing, Supervision, Resources, Data curation, Project administration, Funding acquisition, Formal analysis. MR-M: Project administration, Investigation, Supervision, Methodology, Validation, Data curation, Writing – review & editing, Formal analysis, Visualization, Resources, Conceptualization.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This project received funding from Tecnologico de Monterrey, via the Challenge Based Research Funding Program 2022, Project ID: E061 - EHE-GI02 - D-T3 - E; and CONAHCYT's Ciencia de Frontera 2023 grant, Project ID: CF-2023-G-583.
Acknowledgments
The authors would like to acknowledge the support of Miguel Blanco Ríos and Milton Osiel Candela Leal, as well as the International IUCRC BRAIN Affiliate Site at Tecnologico de Monterrey.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript. Generative AI was used to improve the readability of some sections of the manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Akalin Acar, Z., and Makeig, S. (2013). Effects of forward model errors on EEG source localization. Brain Topogr. 26, 378–396. doi: 10.1007/s10548-012-0274-6
Alzubi, J., Nayyar, A., and Kumar, A. (2018). Machine learning from theory to algorithms: an overview. J. Phys.: Conf. Series 1142:012012. doi: 10.1088/1742-6596/1142/1/012012
Bai, X., and He, B. (2005). On the estimation of the number of dipole sources in EEG source localization. Clini. Neurophysiol. 116, 2037–2043. doi: 10.1016/j.clinph.2005.06.001
Bedenlier, S., Bond, M., Buntins, K., Zawacki-Richter, O., and Kerres, M. (2020). Facilitating student engagement through educational technology in higher education: a systematic review in the field of arts and humanities. Austral. J. Educ. Technol. 126–150. doi: 10.14742/ajet.5477
Bigdely-Shamlo, N., Mullen, T., Kothe, C., Su, K.-M., and Robbins, K. A. (2015). The PREP pipeline: standardized preprocessing for large-scale EEG analysis. Front. Neuroinform. 9:16. doi: 10.3389/fninf.2015.00016
Blanco-Rios, M. A., Candela-Leal, M. O., Orozco-Romo, C., Remis-Serna, P., Velez-Saboya, C. S., Lozoya-Santos, J. D.-J., et al. (2024). Real-time EEG-based emotion recognition model using principal component analysis and tree-based models for neurohumanities. arXiv [preprint] arXiv:2401.15743. doi: 10.3389/fnhum.2024.1319574
Bond, M., Buntins, K., Bedenlier, S., Zawacki-Richter, O., and Kerres, M. (2020). Mapping research in student engagement and educational technology in higher education: a systematic evidence map. Int. J. Educ. Technol. Higher Educ. 17:2. doi: 10.1186/s41239-019-0176-8
Bradley, M. M., and Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25:49–59. doi: 10.1016/0005-7916(94)90063-9
Swartz Center for Computational Neuroscience (n.d.). Makoto's useful EEGLAB code - SCCN. Available online at: https://sccn.ucsd.edu/wiki/Makoto%27s_useful_EEGLAB_code (Accessed November 18, 2023).
Cardona-Álvarez, Y. N., Álvarez Meza, A. M., Cárdenas-Peña, D. A., Castaño-Duque, G. A., and Castellanos-Dominguez, G. (2023). A Novel OpenBCI Framework for EEG-Based Neurophysiological Experiments. Sensors 23:3763. doi: 10.3390/s23073763
Carew, T. J., and Ramaswami, M. (2020). The neurohumanities: an emerging partnership for exploring the human experience. Neuron 108, 590–593. doi: 10.1016/j.neuron.2020.10.019
Chang, C.-Y., Hsu, S.-H., Pion-Tonachini, L., and Jung, T.-P. (2018). “Evaluation of artifact subspace reconstruction for automatic EEG artifact removal,” in 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Honolulu, HI: IEEE), 1242–1245.
Collins, J., Regenbrecht, H., Langlotz, T., Said Can, Y., Ersoy, C., and Butson, R. (2019). “Measuring cognitive load and insight: a methodology exemplified in a virtual reality learning context,” in 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (Beijing: IEEE), 351–362.
Cosoli, G., Poli, A., Scalise, L., and Spinsante, S. (2021). Measurement of multimodal physiological signals for stimulation detection by wearable devices. Measurement 184:109966. doi: 10.1016/j.measurement.2021.109966
Danielyan, N. (2023). Immanuel Kant as the first epistemological constructivist. SHS Web of Conf. 161:07004. doi: 10.1051/shsconf/202316107004
Debener, S., Thorne, J., Schneider, T. R., and Viola, F. C. (2010). “3.1 Using ICA for the analysis of multi-channel EEG data,” in Simultaneous EEG and fMRI, eds. M. Ullsperger, and S. Debener (New York: Oxford University Press), 121.
EEGLAB (n.d.a). Back to Basics - A. Head Model. Available online at: https://eeglab.org/tutorials/09_source/Model_Settings.html (Accessed November 18 2023).
EEGLAB (n.d.b). Equivalent Dipole Source Localization of Independent Components - B. Indep. Comp. Sources. Available online at: https://eeglab.org/tutorials/09_source/DIPFIT.html (Accessed November 18, 2023).
Empatica (n.d.). Empatica E4 User Manual. Available online at: https://empatica.app.box.com/v/E4-User-Manual (Accessed November 18, 2023).
Goldberg, D. P., and Hillier, V. F. (1979). A scaled version of the General Health Questionnaire. Psychol. Med. 9, 139–145. doi: 10.1017/S0033291700021644
Goussain, B., Moura, R., Luche, J., Andrade, H., and Silva, M. (2025). “Enhancing learning with physiological measures: a systematic review of applications in neuroeducation,” in Proceedings of the 17th International Conference on Computer Supported Education (CSEDU 2025) - Vol. 1, 111–122. doi: 10.5220/0013438400003932
Gramouseni, F., Tzimourta, K. D., Angelidis, P., Giannakeas, N., and Tsipouras, M. G. (2023). Cognitive assessment based on electroencephalography analysis in virtual and augmented reality environments, using head mounted displays: a systematic review. Big Data Cognit. Comp. 7:163. doi: 10.3390/bdcc7040163
Grassini, S., and Laumann, K. (2020). Questionnaire measures and physiological correlates of presence: a systematic review. Front. Psychol. 11:349. doi: 10.3389/fpsyg.2020.00349
Greco, A., Valenza, G., Lanata, A., Scilingo, E. P., and Citi, L. (2016a). cvxEDA: a convex optimization approach to electrodermal activity processing. IEEE Trans. Biomed. Eng. 63, 797–804. doi: 10.1109/TBME.2015.2474131
Greco, A., Valenza, G., and Scilingo, E. P. (2016b). Advances in Electrodermal Activity Processing with Applications for Mental Health. Cham: Springer International Publishing.
Guy-Evans, O. (n.d.). Brodmann Areas of The Brain: Anatomy And Functions. Available online at: https://www.simplypsychology.org/brodmann-areas.html (Accessed April 08, 2024).
Hartley, C. A., and Poeppel, D. (2020). Beyond the stimulus: a neurohumanities approach to language, music, and emotion. Neuron 108, 597–599. doi: 10.1016/j.neuron.2020.10.021
Hutson, J., and Olsen, T. (2021). Digital humanities and virtual reality: a review of theories and best practices for art history. Int. J. Technol. Educ. 4, 491–500. doi: 10.46328/ijte.150
Jolles, J., and Jolles, D. D. (2021). On neuroeducation: why and how to improve neuroscientific literacy in educational professionals. Front. Psychol. 12:752151. doi: 10.3389/fpsyg.2021.752151
Khedher, A. B., Jraidi, I., and Frasson, C. (2019). Tracking students' mental engagement using EEG signals during an interaction with a virtual learning environment. J. Intellig. Learn. Syst. Appl. 11, 1–14. doi: 10.4236/jilsa.2019.111001
Lessiter, J., Freeman, J., Keogh, E., and Davidoff, J. (2001). A Cross-Media Presence Questionnaire: the ITC-sense of presence inventory. Presence: Teleoperat. Virt. Environm. 10:282–297. doi: 10.1162/105474601300343612
Liu, Y., Yue, K., and Liu, Y. (2024). Behavioral Analysis in immersive learning environments: a systematic literature review and research agenda. Electronics 14:1278. doi: 10.3390/electronics14071278
Moinnereau, M.-A., De Oliveira, A. A., and Falk, T. H. (2022). Immersive media experience: a survey of existing methods and tools for human influential factors assessment. Qual. User Exp. 7:5. doi: 10.1007/s41233-022-00052-1
Murphy, D., and Higgins, C. (2019). Secondary inputs for measuring user engagement in immersive VR education environments. arXiv [preprint] arXiv:1910.01586. doi: 10.48550/arXiv.1910.01586
Ochs, C., and Sonderegger, A. (2022). The interplay between presence and learning. Front. Virtual Real. 3:742509. doi: 10.3389/frvir.2022.742509
Peper, E., Harvey, R., Lin, I.-M., and Tylova, H. (2007). Is there more to blood volume pulse than heart rate variability, respiratory sinus arrhythmia, and cardiorespiratory synchrony? Assoc. Appl. Psychophysiol. Biofeedb. 35, 54–61. Available online at: https://api.semanticscholar.org/CorpusID:15486681 (Accessed February 02, 2025).
Posada-Quintero, H. F., and Chon, K. H. (2020). Innovations in electrodermal activity data collection and signal processing: a systematic review. Sensors 20:479. doi: 10.3390/s20020479
Pradeep, K., Sulur Anbalagan, R., Thangavelu, A. P., Aswathy, S., Jisha, V. G., and Vaisakhi, V. S. (2024). Neuroeducation: understanding neural dynamics in learning and teaching. Front. Educ. 9:1437418. doi: 10.3389/feduc.2024.1437418
Qadir, Z., Chowdhury, E., Ghosh, L., and Konar, A. (2019). “Quantitative analysis of cognitive load test while driving in a VR vs non-VR environment,” in Pattern Recognition and Machine Intelligence, eds. B. Deka, P. Maji, S. Mitra, D. K. Bhattacharyya, P. K. Bora, and S. K. Pal (Cham: Springer International Publishing), 481–489.
Rahma, O., Putra, A., Rahmatillah, A., Putri, Y. K. A., Fajriaty, N., Ain, K., et al. (2022). Electrodermal activity for measuring cognitive and emotional stress level. J. Med. Signals Sens. 12:155. doi: 10.4103/jmss.JMSS_78_20
Romo-De León, R., Cham-Pérez, M. L. L., Elizondo-Villegas, V. A., Villarreal-Villarreal, A., Ortiz-Espinoza, A. A., Vélez-Saboy, C. S., et al. (2024). EEG and physiological signals dataset from participants during traditional and partially immersive learning experiences in humanities. Data 9:68. doi: 10.3390/data9050068
Salovey, P., Mayer, J. D., Goldman, S. L., Turvey, C., and Palfai, T. P. (1995). "Emotional attention, clarity, and repair: exploring emotional intelligence using the Trait Meta-Mood Scale," in Emotion, Disclosure, and Health, ed. J. W. Pennebaker (Washington: American Psychological Association), 125–154.
Škola, F., Rizvić, S., Cozza, M., Barbieri, L., Bruno, F., Skarlatos, D., et al. (2020). Virtual reality with 360-video storytelling in cultural heritage: study of presence, engagement, and immersion. Sensors 20:5851. doi: 10.3390/s20205851
Soler, A., Muñoz-Gutiérrez, P. A., Bueno-López, M., Giraldo, E., and Molinas, M. (2020). Low-density EEG for neural activity reconstruction using multivariate empirical mode decomposition. Front. Neurosci. 14:175. doi: 10.3389/fnins.2020.00175
The MathWorks (n.d.a). Band Power - MATLAB Bandpower. Available online at: https://www.mathworks.com/help/signal/ref/bandpower.html (Accessed November 19 2023).
The MathWorks (n.d.b). MATLAB. Available online at: https://www.mathworks.com/products/newproducts/release2021a.html (Accessed November 18 2023).
The MathWorks (n.d.c). Rank Features for Classification Using Minimum Redundancy Maximum Relevance (MRMR) Algorithm - MATLAB fscmrmr. Available online at: https://www.mathworks.com/help/stats/fscmrmr.html#mw733b9b36-11f2-4aa2-85fc-0988c425cd9 (Accessed November 18 2023).
The MathWorks (n.d.d). Welch's Power Spectral Density Estimate - MATLAB pwelch. Available online at: https://www.mathworks.com/help/signal/ref/pwelch.html (Accessed November 18, 2023).
Keywords: electroencephalography, emotions, immersive technologies, learning environments, neurohumanities
Citation: Romo-De León R, Cham-Pérez MLL, Elizondo-Villegas VA, Villarreal-Villarreal A, Ortiz-Espinoza AA, Vélez-Saboyá CS, Lozoya-Santos JdJ, Cebral-Loureda M and Ramírez-Moreno MA (2026) Neurophysiological assessment of biometric patterns during semi-immersive and traditional learning experiences in the humanities. Front. Hum. Neurosci. 20:1692599. doi: 10.3389/fnhum.2026.1692599
Received: 25 August 2025; Revised: 23 December 2025; Accepted: 06 January 2026;
Published: 03 February 2026.
Edited by:
Kuldeep Singh, Guru Nanak Dev University, India
Reviewed by:
Vignayanandam Ravindernath Muddapu, Azim Premji University, India
Elham Shamsi, Institute for Research in Fundamental Sciences (IPM), Iran
Copyright © 2026 Romo-De León, Cham-Pérez, Elizondo-Villegas, Villarreal-Villarreal, Ortiz-Espinoza, Vélez-Saboyá, Lozoya-Santos, Cebral-Loureda and Ramírez-Moreno. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Mauricio A. Ramírez-Moreno, mauricio.ramirezm@tec.mx
†ORCID: Rebeca Romo-De León orcid.org/0009-0001-7950-6535
Mei Li L. Cham-Pérez orcid.org/0009-0006-9736-3570
Verónica Andrea Elizondo-Villegas orcid.org/0009-0009-1785-8963
Alejandro Villarreal-Villarreal orcid.org/0009-0003-4042-5337
Alexandro Antonio Ortiz-Espinoza orcid.org/0000-0002-3945-6908
Carol Stefany Vélez-Saboyá orcid.org/0000-0002-9344-158X
Jorge de Jesús Lozoya-Santos orcid.org/0000-0001-5536-1426
Manuel Cebral-Loureda orcid.org/0000-0001-6359-2427
Mauricio A. Ramírez-Moreno orcid.org/0000-0003-0306-2971