
REVIEW article

Front. Bioeng. Biotechnol., 29 June 2023
Sec. Biosensors and Biomolecular Electronics
This article is part of the Research Topic Current Development on Wearable Biosensors towards Biomedical Applications

Current development of biosensing technologies towards diagnosis of mental diseases

  • 1Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo, China
  • 2Ningbo Research Center, Ningbo Innovation Center, Zhejiang University, Ningbo, China
  • 3Robotics Institute, Ningbo University of Technology, Ningbo, China
  • 4Nottingham Ningbo China Beacons of Excellence Research and Innovation Institute, University of Nottingham Ningbo China, Ningbo, China
  • 5School of Mechanical Engineering, Zhejiang University, Hangzhou, China

A biosensor is an instrument that converts the concentration of biomarkers into electrical signals for detection. Biosensing technology is non-invasive, lightweight, automated, and biocompatible in nature. These features have significantly advanced medical diagnosis, particularly the diagnosis of mental disorders, in recent years. The traditional method of diagnosing mental disorders is time-intensive, expensive, and subject to individual interpretation; it combines the clinical experience of the psychiatrist with the physical symptoms and self-reported scales provided by the patient. Biosensors, on the other hand, can objectively and continuously detect disease states by monitoring abnormal biomarker data. Hence, this paper reviews the application of biosensors in the detection of mental diseases, with the diagnostic methods divided into five sub-themes: vision-based, EEG signal, EOG signal, VR-based, and multiple-signal biosensors. Prospective applications in clinical diagnosis are also discussed.

1 Introduction

Biosensors are instruments that apply bio-sensing elements to collect information recorded through specific biological, physical, and chemical changes, which are converted into measurable signals (Vigneshvar et al., 2016). The properties that can be detected include changes in pH, gas, mass, electron transport, heat transport, and the absorption and release of specific ions (Velasco-Garcia and Mottram, 2003; Zhang et al., 2021a). Biosensors have been utilized successfully in many areas, such as biological signal monitoring, environmental surveying, motion observation, gas analysis, and health tracking (Zhang et al., 2021a; Liu et al., 2021; Zhang et al., 2022), as well as in medical applications and healthcare (Guo et al., 2021; Wang et al., 2022). The advancement of biotechnology and new materials has paved the way for the development of advanced biosensors that aid the detection of mental diseases (Zhang et al., 2021b). Being noninvasive, low-cost, wearable, sensitive, and capable of dynamic monitoring, these detection technologies offer increasing accuracy, response rate, deformability, and biocompatibility (Guo et al., 2019; Yang et al., 2019). This makes them of great value for healthcare practitioners in the treatment of mental diseases.

Mental diseases are disorders that affect cognition, emotion, volition, and behavior (Hyman et al., 2006). Episodes of mental disease can severely interfere with learning and social skills. Mental diseases often begin early in life and are often chronic, relapsing processes (Hyman et al., 2006). According to the World Mental Health Report, about 1 billion people worldwide have mental diseases (Freeman, 2022). The outbreak of COVID-19 has exacerbated this situation (Li et al., 2020). It is difficult for some patients with mental diseases and chronic diseases to receive continuous treatment during COVID-19, which may lead to relapse of mental diseases and exacerbation of negative emotions (Psychiatry CSo, 2020; Bo et al., 2021). Additionally, symptoms of depression and anxiety have also increased in children and adolescents (Racine et al., 2020). Therefore, research related to mental diseases has received increasing attention. There is a wide range of mental diseases, including Alzheimer's disease, depression, schizophrenia, autism spectrum disorder, and various personality disorders (Busfield, 2011). The traditional diagnosis of mental diseases requires multiple physical indicators, combined with self-rating scales and nursing reports from guardians and psychiatrists (Möricke et al., 2016; Römhild et al., 2018). Moreover, the bias of subjective judgment may lead to misdiagnosis, which compromises treatment of the diseases (Huang et al., 2017). As a result, diagnosing mental disorders accurately remains difficult worldwide. Compared with traditional diagnostic techniques, biosensors can quantify the qualitative expression of the brain by detecting biomarkers, and avoid cultural and language differences among subjects (Hidalgo-Mazzei et al., 2018).

Biomarkers are defined as biological characteristics measurable in biological media (such as human tissues, cells, or fluids) as indicators of normal biological processes, pathogenic processes, or pharmacological responses to therapeutic interventions (Mayeux, 2004). Mental diseases are often accompanied by the impairment of certain functions, e.g., impaired visual attention processing, social dysfunction, and restrictive, repetitive behaviors (Schmidt et al., 2011; Edition FJAPA, 2013). Abnormalities in these biomarkers can be measured by biosensors to distinguish the presence and absence of disease states (Schmidt et al., 2011), which provides a more convenient and effective tool for rapid diagnosis. These biosensors utilize cutting-edge technology not only to collect and compare biological data, such as eye-tracking data, electroencephalography (EEG), electrooculogram (EOG), and cognition and behavior in virtual reality (VR), but also to employ machine learning algorithms to extract biometrics for more objective results and higher accuracy (Burdea and Coiffet, 2003; Plitt et al., 2015; Ibrahim et al., 2018; Yaneva et al., 2018; Yamagata et al., 2019; Zhao et al., 2021). Multiple-signal sensors combined with machine learning have emerged as a relatively novel trend, offering a more comprehensive understanding of participant responses. However, the preparation and data-collection procedures of even a single biosensor are already relatively cumbersome. The popularization of large-scale mental disease screening and telemedicine will require more efficient, higher-precision, wearable multi-signal integrated sensors in the future.

This review summarizes research trends in biosensors for the detection of mental diseases. The review focuses on five objective quantification methods in the field, namely vision-based, EEG signal, EOG signal, VR-based, and multiple-signal sensors. Vision-based sensors detect mental diseases associated with abnormal visual attention through eye-tracking devices. EEG and EOG sensors can detect mental diseases associated with abnormal brain and eye signals. In addition, VR-based sensors offer more possibilities for detecting mental diseases associated with impaired spatial navigation and memory. The use of multiple-signal sensors can improve diagnostic accuracy and efficiency. The review also describes detection devices, signal processing methods, and disease assessment techniques in detail. Future research directions and application prospects are also covered.

2 Mental diseases

2.1 AD (Alzheimer’s disease) and MCI (mild cognitive impairment)

Alzheimer's disease, whose chief symptoms are memory impairment, attention deficit, and executive dysfunction, is one of the most common neurodegenerative diseases worldwide that lead to dementia (Tschanz et al., 2006; Belleville et al., 2007). According to the Alzheimer's Association, more than 10 percent of people over 65 in the United States suffer from this disease, and the proportion reaches nearly half among people over 85 (Meek et al., 1998). As the population ages, the prevalence of Alzheimer's disease is predicted to double by 2050 (Mattson, 2004). However, there is still no effective cure for Alzheimer's disease, and medicines can only relieve the symptoms to some extent (Dauwels et al., 2010). Mild cognitive impairment (MCI) is the intermediate stage between the cognitive decline associated with normal ageing and early AD (Gauthier et al., 2006). Hence, early detection of MCI is significant in preventing AD. Traditional diagnostic methods for Alzheimer's disease need to combine several techniques, including medical history analysis, neurological tests, blood tests, and psychological tests, which are highly tedious, subjective, and expensive (Sunderland et al., 2006; Weiner MJTjon et al., 2009). More efficient and accurate diagnostic approaches are therefore urgently needed.

2.2 ASD (autism spectrum disorder)

Autism spectrum disorder is a neurodevelopmental disorder characterized by social communication deficits and repetitive behaviors (Edition FJAPA, 2013). Autism affects more than 70 million people worldwide, and about 1 in 68 children suffer from this disorder (Wang et al., 2013). The cause of ASD is still unknown. According to available scientific evidence from the World Health Organization, many factors can lead to ASD in children, including abnormal brain development and neural reorganization in early childhood (Black et al., 2017). To date, the diagnosis of ASD has typically been based on observation of the daily functionality and behavioral characteristics of patients. The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) provides diagnostic criteria for referencing clinical symptoms. Other ancillary assessment tools, including behavioral scale tests, checklists, and questionnaires, supplement diagnostic results (Pereira et al., 2007). The standards for diagnosing ASD are complex and diverse because the associated dysfunctions vary considerably among individuals. Hence, in the absence of reliable biomarkers, these evaluation methods are not objective, and more accurate and objective clinical detection methods for ASD are urgently needed.

2.3 Depression

Depression is a common mood disorder. Data published by the World Health Organization show that more than 300 million people worldwide suffer from depression (Organization, 2017). Depression results from various factors, including the interaction of social, psychological, and biological factors, and thus its causes are complex. Depression is different from the short-lived sadness of everyday life. The course of depression is recurrent, including periods with symptomatic episodes and periods of recovery. During a depressive episode, patients feel a low mood, anhedonia, and low energy. In the worst cases, depression can even lead to suicide. People who experience a depressed mood almost every day for at least 2 weeks are diagnosed with depression according to the DSM-IV classification (Wong and Licinio, 2001). However, in developing countries, people with depression are often not detected early (Chisholm et al., 2016). To this end, researchers are trying to identify biometric markers that could be used to distinguish depression patients from healthy individuals in primary care. Among these markers, attentional bias has emerged as a leading candidate and has been applied in various studies. Studies of attentional bias in clinical disorders have inferred the bias by assessing reaction time. Studies have shown that people with depression focus more on negative stimuli than positive ones (Klawohn et al., 2020).

2.4 Schizophrenia

Schizophrenia is a multi-attribute chronic disease that results in dysfunction of sensory perception, thinking, emotion, volitional behavior, and cognition. These symptoms vary widely among patients. Existing instruments are not powerful enough to fully assess the mental states of patients. The traditional detection of schizophrenia is based on a diagnosis from the psychiatrist, supplemented by Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Computed Tomography (PET) (Andreasen, 1995). Therefore, detection methods independent of antipsychotic medication and the mental states of patients are being explored.

2.5 Sleep disorder

Sleep quality is an essential indicator for assessing human homeostasis. Sleep disorder is a kind of mental disease associated with severe medical, psychological, and social consequences (Silber et al., 2007). A widely used diagnostic method is polysomnography (PSG), which uses a variety of electrodes and sensors to record about ten signals during sleep (Yildirim et al., 2019). This method is complex and expensive. Therefore, it is necessary to develop a new automated sleep disorder detection system that can help doctors assess the sleep stages of patients more efficiently and accurately. When people fall asleep, their eye movements tend to slow down and their EOG values tend to be lower (Carskadon and Dement, 2005). The EOG signals help distinguish between rapid eye movement and non-rapid eye movement sleep. Owing to the significant progress made in object detection by deep learning technology (LeCun et al., 2015), various automatic diagnosis methods based on EOG signals have been proposed.

2.6 Epilepsy

Epilepsy is a chronic brain disease in which neuronal activity is abnormally synchronized or excessive, characterized by transient and recurrent seizures. Temporal lobe epilepsy (TLE) is a common focal epilepsy diagnosed by advanced imaging examination and EEG. Most TLE patients experience memory deficits and attention disorders (Blackwood et al., 1994; Bocquillon et al., 2009). Traditional diagnosis of epilepsy is based on patient and eyewitness descriptions and video recordings of seizures. Unfortunately, it is common for seizures to be misdiagnosed as other disorders, e.g., convulsive syncope (Bocquillon et al., 2009). Consequently, there is a need for an accurate way to identify epilepsy.

Vision-based sensors can detect AD, ASD, and depression. Depression and ASD can also be diagnosed by EEG sensors, as can epilepsy and schizophrenia. EOG sensors are capable of diagnosing schizophrenia and sleep disorders, which are also detectable by EEG signal sensors. VR-based sensors can detect all of the above mental diseases except sleep disorders. Multiple-signal sensors can diagnose all of the above mental diseases.

3 Applications

3.1 Vision-based sensors

Eye-tracking is the process of measuring individual eye movements and gaze positions to reflect gaze behavior. Usually, when a person gazes at an object, attention is shifted to a specific point so that the image occupying the direction of the gaze center can be examined in detail (Kennedy, 2016). As shown in Figure 1, the most prevalent eye-tracking technologies are table-mounted and head-mounted video-based eye-trackers (Hutton, 2019; Carter and Luke, 2020). Eye trackers can record various forms of eye movements, including fixations, saccades, and other types (blinks, smooth pursuits, and vergence), by measuring the position of the infrared corneal reflection relative to the pupil (Rayner, 2009; Carter and Luke, 2020). In an eye-tracking task, participants are required to gaze at stimuli (e.g., pictures, videos, and web pages) (Zhao et al., 2021) in order to provide information about a person's attention allocation in a visual scene (Ashraf et al., 2018). Multiple attempts to diagnose mental diseases based on eye-tracking technologies have shown a bright future in clinical diagnosis.
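Fixations and saccades are usually separated from the raw gaze stream algorithmically. As a minimal sketch, assuming a dispersion-threshold (I-DT) style approach rather than the proprietary algorithm of any particular eye-tracker, consecutive gaze samples are grouped into a fixation while their spatial spread stays below a threshold; the dispersion and duration values below are illustrative.

```python
import numpy as np

def detect_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    x, y: gaze coordinates (e.g., degrees of visual angle)
    t: timestamps in seconds
    max_dispersion, min_duration: illustrative thresholds, not values from the cited studies
    Returns a list of (start_time, end_time, centroid_x, centroid_y) tuples.
    """
    fixations, start, n = [], 0, len(t)
    while start < n:
        end = start
        # Grow the window while its spatial spread stays below the threshold
        while end + 1 < n:
            wx, wy = x[start:end + 2], y[start:end + 2]
            if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_dispersion:
                break
            end += 1
        if t[end] - t[start] >= min_duration:
            fixations.append((t[start], t[end],
                              float(np.mean(x[start:end + 1])),
                              float(np.mean(y[start:end + 1]))))
            start = end + 1
        else:
            start += 1
    return fixations
```

Samples that never accumulate into a long, compact window are treated as saccadic movement and skipped, which is the basic distinction the eye-tracking studies below rely on.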

FIGURE 1

FIGURE 1. (i) The portable video-based eye-tracker: (A) the operator; (B) the participant (Hutton, 2019). (iiA) The head-mounted video-based eye-tracker (Gotardi et al., 2020). (B) The participant wears the eye-tracker (Kapp et al., 2021). (iiiA) The forms of eye movements in a reading scene; (B) the saccades and fixations in images (Carter and Luke, 2020).

Selective attention is defined as the ability to screen out relevant and applicable information (Levinoff et al., 2004). Given that selective attention, which is associated with neurotransmitter function, is impaired in the early stages of AD (Perry and Hodges, 1999; Rangel-Gomez and MJJop, 2016), the study of selective attention to new stimuli can provide new insight into the cognition and attention of AD patients. Chau et al. proposed a non-verbal and non-invasive diagnostic method to predict cognitive decline in mild to moderate AD patients by estimating their novelty preference (Chau et al., 2017). A binocular eye-tracking system recorded eye gaze data, including position, time, and frequency, as AD patients viewed novel and repeated images on slides (Figure 2i). These visual scanning parameters were processed by automatic classification algorithms to estimate novelty preference. The Standardized Mini-Mental Status Examination (SMMSE) and Conners Continuous Performance Test (CPT) were used to assess the cognition and attention of patients separately. The results showed that impaired people paid less attention to, or had less preference for, novel images than the control group.

FIGURE 2

FIGURE 2. (i) The slides started with four new images and, after 10.5 s, switched to the next slide with two new images and two duplicate images (Chau et al., 2017). (iiA) The scan path of ASD participants during the task of reading the BBC website (Yaneva et al., 2018). (B) The analysis of visual gaze features is based on four areas of interest, including eyes, mouth, whole face, and whole body (Zhao et al., 2021). (iii) The sad and neutral facial expressions in the left 4 × 4 grid and the happy and neutral facial expressions in the right 4 × 4 grid (Klawohn et al., 2020).

Yaneva et al. (2018) described an unobtrusive method for detecting ASD through two everyday tasks: browsing and searching web information (Figures 2iiA). Six web pages were randomly presented to 36 participants (16 ASD patients and 16 non-ASD controls), and their gaze data were collected by an eye-tracker within 2 min. Five gaze features (i.e., time viewed %, time viewed sec, time to first view, fixations, and revisits) and five non-gaze features (i.e., participant gender, AOI ID, correct answer AOI, media ID, and level of visual complexity) were used to train a machine learning classifier to identify whether participants had ASD using Logistic Regression (LR) with 100-fold cross-validation. The basic principle of this method is to reveal the different attention-shifting mechanisms of the two groups, using attention differences as a new alternative marker. The preliminary results showed that the search task elicited larger differences between the two groups than the browsing task. The classifier achieved a classification accuracy of 0.75, which proved effective in detecting ASD. In addition to the web task, Zhao et al. (2021) proposed another approach for diagnosing ASD based on eye-tracking data from face-to-face conversations. Participants wearing a head-mounted eye-tracker had a conversation with a female interviewer (see Figures 2iiB). The four arranged informal sessions included general questions, hobbies, yes-no questions, and questions from respondents. Afterwards, the visual fixation features and session length features from the eye-tracking data were extracted to determine which of four machine learning classifiers had the optimal classification accuracy by implementing forward feature selection. The classifiers included Support Vector Machine (SVM), Linear Discriminant Analysis, Decision Tree, and Random Forest (RF). The results showed that all classifiers reached a maximum classification accuracy of more than 84%, and the SVM classifier achieved the highest classification accuracy of 92.31%. Consequently, children with ASD can be preliminarily diagnosed in daily life through face-to-face conversations.
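To illustrate how such extracted gaze features can be fed to a classifier under cross-validation, the hedged sketch below trains an SVM on a placeholder feature matrix; the feature layout, labels, and data are assumptions for demonstration, not the datasets of the studies above.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per participant, columns such as
# [time viewed %, time to first view, number of fixations, number of revisits].
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))        # placeholder gaze features
y = np.repeat([0, 1], 16)           # placeholder labels: 1 = ASD, 0 = control

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```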

Sanchez et al. further elaborated on the role of attentional bias in diagnosing depression (Sanchez et al., 2013). They designed an eye-tracking task to evaluate the disengagement of attention from emotional stimuli and to test whether mood changes during and after stimulation in depressed patients are associated with difficulties in visual disengagement. Nineteen participants with Major Depressive Disorder and 16 healthy participants freely viewed emotional (i.e., happy, sad, and angry) and neutral faces. Eye-tracking devices synchronously recorded the initial orientation, fixation frequency, and fixation time over 3,000 ms. In the subsequent engagement-disengagement task, an attention interval was also recorded when moving from emotional to neutral faces. The study found that people with depression took longer to disengage from depression-related stimuli, e.g., sad faces. In other words, as depression increases, disengagement from negative information becomes slower. Therefore, difficulty disengaging from negative information can be regarded as a critical feature of depression.

Ferrari et al. replicated the above Sanchez experiment and further proposed attentional bias modification (ABM) tasks (Ferrari et al., 2016). The ABM tasks, comprising positive training (PT) and negative training (NT), were designed to evaluate and train the components that interfere with attention in depression. The participants included 78 female and 17 male college students and were assigned to PT (n = 48) or NT (n = 47) in a double-blind fashion. In the PT, the test continued when the participants disengaged from negative pictures and gazed at the positive pictures for 1000 ms. In contrast, in the NT, participants were asked to sustain attention on negative pictures for 1000 ms and to look away from positive ones. The results showed that participants in the PT gazed at the positive images longer and disengaged from the negative images more quickly, whereas there was no change in attentional processes in the NT. The two groups showed no difference in emotional responses or recovery from stress. This disengagement training was intended to increase attentional bias towards positive information and facilitate disengagement from negative information, which has significant relevance for the treatment of depression.

To investigate the relationship between attentional bias and other influencing factors, Lu et al. (2017) proposed a free-viewing eye movement task to compare differences in attentional bias between depressed patients and healthy subjects at different ages. The trials were divided into two groups: happy-neutral faces and sad-neutral faces. The participants were also divided into two groups: young (18–30 years old) and middle-aged (31–55 years old). Compared with healthy subjects, depressed patients tended to pay less attention to positive stimuli and more to negative stimuli. Among major depressive disorder (MDD) patients, the middle-aged group had less positive attentional bias than the younger group, and there was no difference in negative attentional bias between the two groups. This study further demonstrated that emotional bias in depression is correlated with age.

Complex visual arrays were employed by Klawohn et al. (2020) to examine the attentional bias of depressed patients to facial expressions. Participants freely viewed two sets of four-by-four arrays of facial expressions (see Figures 2iii). The first set was made up of sad and neutral faces, and the other of happy and neutral faces. At the same time, an Eyelink 1000+ recorded the dwell time of participants on different facial expressions. Compared to healthy individuals, depressed individuals spent more time gazing at sad faces, which indicates an abnormal attentional bias towards negative stimuli. Both the healthy and depressed groups showed higher dwell times on happy expressions. In addition, the study further showed that attentional bias towards negative stimuli was not associated with the severity or chronicity of depression but with external environmental factors, including childhood trauma and sad events in contemporary life.
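Dwell time on an area of interest (AOI), the measure reported in these facial-array studies, is essentially the summed duration of fixations landing inside that AOI. The sketch below is a hedged illustration of that aggregation; the fixation-tuple format and AOI coordinates are assumptions for demonstration only.

```python
def dwell_time_per_aoi(fixations, aois):
    """Sum fixation durations falling inside each rectangular AOI.

    fixations: iterable of (start_t, end_t, centroid_x, centroid_y) tuples
    aois: dict mapping AOI name -> (x_min, y_min, x_max, y_max); boxes are illustrative
    """
    totals = {name: 0.0 for name in aois}
    for start_t, end_t, cx, cy in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= cx <= x1 and y0 <= cy <= y1:
                totals[name] += end_t - start_t
                break
    return totals

# Example: hypothetical AOIs covering the sad and neutral halves of a stimulus array
aois = {"sad_faces": (0, 0, 400, 800), "neutral_faces": (400, 0, 800, 800)}
print(dwell_time_per_aoi([(0.0, 0.4, 120, 300), (0.5, 0.9, 620, 250)], aois))
```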

It is evident that vision-based sensors can be utilized to assess attentional biases in mental diseases. Table 1 summarizes examples of the use of vision-based sensors for mental disease diagnosis. People with AD and depression pay less attention to positive and novel stimuli, and negative information affects people with depression for a longer period. People with ASD lose sight of the whole and focus on details. Machine learning was used to diagnose AD and ASD patients, achieving a highest accuracy of 92.31%. Gaze-based devices, i.e., eye trackers, have the advantages of being non-invasive and wearable, and eye-tracking data are easily collected. However, blinks and jitter in eye movement data introduce noise and errors into the diagnosis. In addition, this method has some limitations, including long preparation time, fragile equipment, and the requirement for non-myopic participants.

TABLE 1

TABLE 1. Examples of vision-based sensors towards mental disease diagnosis. SMMSE, standardized mini-mental status examination; CPT, Conners continuous performance test; AD, Alzheimer's disease; LR, logistic regression; SVM, support vector machine; PT, positive training; NT, negative training; MDD, major depressive disorder.

3.2 EEG signal sensors

Brain function is based on electrical signals among neurons in the brain. Because mood, mental state, and attention are all controlled by different brain regions, mental diseases caused by brain damage can be diagnosed by EEG (Liu et al., 2013). EEG is a non-invasive, effective tool for measuring the electrical activity of the brain and monitoring changes in brain function at rest or during stimulation. The principle is to measure tiny fluctuations in the electrical current between the skin and the sensor electrodes, then amplify and filter the current (Sullivan et al., 2007). The American Clinical Neurophysiology Society recommends collecting brain electromagnetic activity through the 10–20 and 10–10 electrode placement systems (Figure 3ii) (Acharya et al., 2016). The capital letters F, C, T, P, and O stand for frontal, central, temporal, parietal, and occipital, respectively, and indicate where the electrodes are attached to the skull; odd numbers denote the left side of the brain and even numbers the right (Abhang et al., 2016). When collecting EEG data, people should wear an EEG device that maintains a consistent electrical connection between the sensor electrodes and the scalp, which can be achieved by various methods (Figures 3i, iii), including dry EEG devices, saline solution EEG devices, and soft gel-based EEG devices (Soufineyestani et al., 2020). Dry EEG devices do not require any saline or gel, or even direct connection to the scalp, enabling a shorter setup time than wet EEG devices. As computer-aided diagnosis (CAD) has become an important part of the medical industry, some researchers have attempted to diagnose mental diseases by using machine learning techniques to extract features from EEG signals (Khare and Bajaj, 2020; Khare et al., 2020).

FIGURE 3

FIGURE 3. (iA) Dry EEG devices. (B) Saline solution EEG devices. (C) Soft gel-based EEG devices (Soufineyestani et al., 2020). (ii) The position and name of each electrode in the 10–20 system (Soufineyestani et al., 2020). (iii) Real-time EEG signal recording system (Liu et al., 2013). (ivA) The methods of signal acquisition. (B) The types and techniques of signal processing. (C) Methods of feature extraction. (D) Machine learning algorithms (Sachadev et al., 2022).

As shown in Figure 3iv, collecting raw EEG signals involves many activities and paradigms, including event-related potentials, emotion recognition, sleep stage scoring, motor imagery, seizure detection, and mental workload. According to their frequency range, human brain waves can be divided into the Alpha, Beta, Delta, Gamma, and Theta bands (Liu et al., 2013). Preprocessing methods include the Adaptive Filter, Surface Laplacian, Independent Component Analysis, Common Spatial Patterns, Common Average Reference, and Principal Component Analysis, whose purpose is to improve the signal-to-noise ratio, remove artefacts, interference, and noise, and retain the pure EEG signal. After preprocessing the EEG data, various features are extracted; the Wavelet Transform, Fast Fourier Transform, Principal Component Analysis, Independent Component Analysis, Power Spectral Density, Autoregressive Method, Eigenvectors, and time-frequency distributions can be utilized. Artificial intelligence and machine learning models, including K Nearest Neighbors, Support Vector Machine, Deep Learning, Artificial Neural Network, Linear Discriminant Analysis, and Naive Bayes, are used to analyze these features and classify healthy people and patients (Sachadev et al., 2022). Presently, EEG can diagnose AD, sleep disorders, and brain tumours (Cai et al., 2020; Duan et al., 2020; Akbari et al., 2021a). Recent studies have focused on detecting ASD, depression, epilepsy, and schizophrenia.
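As a minimal sketch of this generic pipeline, assuming a single-channel recording and synthetic data rather than the exact methods of any cited study, the example below bandpass-filters each epoch, extracts relative band powers with Welch's power spectral density, and evaluates a KNN classifier under cross-validation; the sampling rate and band limits are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

FS = 256  # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def bandpass(sig, lo=0.5, hi=45.0, fs=FS, order=4):
    """Keep only the 0.5-45 Hz range (illustrative preprocessing step)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def band_powers(sig, fs=FS):
    """Relative power in each canonical EEG band, via Welch's PSD."""
    freqs, psd = welch(sig, fs=fs, nperseg=fs * 2)
    total = np.trapz(psd, freqs)
    return [np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                     freqs[(freqs >= lo) & (freqs < hi)]) / total
            for lo, hi in BANDS.values()]

rng = np.random.default_rng(1)
epochs = rng.normal(size=(40, FS * 10))   # placeholder 10-s single-channel epochs
labels = np.repeat([0, 1], 20)            # placeholder labels: 1 = patient, 0 = control

X = np.array([band_powers(bandpass(e)) for e in epochs])
print(cross_val_score(KNeighborsClassifier(n_neighbors=3), X, labels, cv=10).mean())
```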

Social communication is based mainly on nonverbal behavior to understand the intentions of others, e.g., gestures, actions, and emotions. Past studies provide evidence that people with ASD often have difficulty with social communication and the recognition of facial emotions (Black et al., 2017; Tawhid et al., 2020). Based on these symptoms, Paula et al. (2017) proposed an approach to diagnosing ASD by analyzing changes in high-frequency EEG as children observe three different facial expressions (happy, neutral, and angry). In this visual stimulation task, photos with different expressions were displayed for 3 s each, with intervals of 0.5–1.0 s, controlled by Mangold Vision 3.9. The EEG-1200 measured EEG data reflecting neural activity evoked by the stimulus, and the Eye-Tech TM3 ensured that participants viewed the fixation points and visual stimuli during this time. The EDF Browser converted each subject's raw EEG data into ASCII format, and the Python programming language and MATLAB code were applied to process these EEG data. Their mean and standard deviation could be calculated to observe changes in local field potentials associated with events. After comparing EEG data from 8 children with ASD and 8 healthy children, the study found that the ASD group presented a stronger power spectrum at high frequencies (above 30 Hz) compared to controls in some regions, including the frontal, occipital, and mid-parietal regions of the brain. The EEG signals of the occipital and central parietal regions showed significant differences, while the central region and parietal lobe presented similar patterns. Consequently, the findings demonstrated that the EEG activity of children with and without ASD showed different quantitative patterns of the power spectrum when observing facial expressions. Another effective diagnostic system was developed by Tawhid et al. (2020), which can automatically identify ASD based on 2D spectrogram images of EEG signals. The novelty of the study lies in the conversion of EEG signals into time-frequency-based spectrogram images: the preprocessed EEG signal was converted into a two-dimensional (2D) image by the short-time Fourier transform, and ternary CENTRIST was then used to extract texture features. 10-fold cross-validation with the SVM classifier yielded an average classification accuracy (ACC) of 95.25%, a sensitivity of 97.07%, and a specificity of 90.95%.
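The spectrogram-image idea can be illustrated with a short, hedged sketch: an EEG segment is passed through the short-time Fourier transform and the log-scaled result is normalized so it can be treated as a 2D image. The window length, overlap, and synthetic signal are assumptions, not the parameters of the cited work.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 256                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(2)
eeg_segment = rng.normal(size=fs * 4)     # placeholder 4-s EEG epoch

# Short-time Fourier transform: 1-s windows with 50% overlap (illustrative choices)
freqs, times, Sxx = spectrogram(eeg_segment, fs=fs, nperseg=fs, noverlap=fs // 2)

# Log-scale and min-max normalize so the array can be saved or fed to a model as an image
img = 10 * np.log10(Sxx + 1e-12)
img = (img - img.min()) / (img.max() - img.min())
print(img.shape)   # (frequency bins, time frames): a 2D time-frequency "image"
```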

Currently, EEG is also a popular tool for investigating the presence of depression biomarkers. Previous studies have demonstrated that the EEG signals of depressive patients are less predictable and have lower morphological complexity (Sharma et al., 2018; Sadiq et al., 2020). Sadiq et al. designed a user-friendly method based on the centered correntropy (CC) of rhythms in the empirical wavelet transform (EWT) to classify the EEG signals of 22 depressed and 22 non-depressed subjects (Akbari et al., 2021b). The non-stationary EEG signals were decomposed into rhythms by EWT (Figures 4iA), and the EWT filter bank was created. CC was calculated from the decomposed delta, theta, alpha, beta, and gamma rhythms, regarded as the discriminative feature, and then fed to SVM and KNN (K Nearest Neighbors) classifiers (Figures 4iB, C). The area under the receiver operating characteristic curve (AUC) quantifies the ability to classify depressed and normal EEG signals. The AUC values of the SVM classifier with a radial basis function kernel and the KNN classifier with city-block and Euclidean distances were compared to evaluate the performance of the two classifiers. The proposed method ultimately achieved 98.76% ACC under a 10-fold cross-validation strategy. Subsequently, Sadiq et al. further employed the reconstructed phase space (RPS) of EEG signals and geometrical features for depression detection (Akbari et al., 2021a). The method for classifying and evaluating the selected features was the same as in the previous experiment, using SVM and KNN classifiers; the difference was in the processing stage. The EEG signals of the left and right hemispheres were plotted by RPS in 2D space (Figures 4ii), and 34 nonlinear geometrical features were extracted from these plots. Four optimization algorithms, Ant Colony Optimization (ACO), Grey Wolf Optimization (GWO), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), were used to reduce the feature vectors, and their performance was compared. GA achieved better performance, with a 58.8% reduction in the feature vector arrays. The framework using the PSO algorithm and SVM classifier finally achieved 99.3% ACC and a Matthews correlation coefficient (MCC) of 0.98 in the right and 0.95 in the left hemisphere. Therefore, it could be concluded that EEG signals can be used as biomarkers to detect depression. Furthermore, EEG signals from the right hemisphere are more critical for detecting depression than those from the left. In subsequent work, Sadiq et al. (2021) presented a novel CAD system to detect depression automatically. In this study, the bipolar channels "FP1-T3" and "FP2-T4" from the left and right brain were used to collect EEG records for 10 min at a 256 Hz sampling frequency. Using new 2D modelling of intrinsic mode functions (2D-IMFs) (Figures 4iii), the Binary Particle Swarm Optimization (B-PSO) algorithm, and a KNN classifier, depression and epilepsy could be classified and diagnosed. As a result, this system possessed various advantages, including time savings, a high classification accuracy of 93.35%, and multirole adaptability. This work provided a novel way to diagnose two mental diseases with one algorithm.
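A reconstructed phase space of the kind used in these studies is a time-delay embedding of the signal, from which geometric descriptors are then computed. The sketch below is a hedged illustration with an arbitrary delay and a single toy descriptor (mean distance from the centroid), not the 34 features of the cited work.

```python
import numpy as np

def reconstructed_phase_space(sig, delay=4):
    """2D time-delay embedding: points (x(t), x(t + delay)); the delay is illustrative."""
    return np.column_stack([sig[:-delay], sig[delay:]])

rng = np.random.default_rng(3)
eeg = rng.normal(size=1024)            # placeholder EEG segment
rps = reconstructed_phase_space(eeg)

# One simple geometric descriptor: mean Euclidean distance of points from the centroid
centroid = rps.mean(axis=0)
mean_radius = np.linalg.norm(rps - centroid, axis=1).mean()
print(f"points: {rps.shape[0]}, mean radius: {mean_radius:.3f}")
```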

FIGURE 4

FIGURE 4. (iA) EWT filter bank. (B) The decomposed rhythms from the left and right hemispheres of the normal and depressed groups. (C) The AUC values of SVM and KNN classifiers (Akbari et al., 2021b). (iiA) The samples of normal and depressed RPS for the left and right hemispheres. (B) Quantified the geometric features of the RPS pattern of EEG signals in 2D space (Akbari et al., 2021a). (iii) 2D-IMFs of EEG signals in depression (Sadiq et al., 2021).

Another EEG signal classification method for the diagnosis of epilepsy was proposed by Akbari and Esmaili (2020). The novelty of this approach lies in the use of second-order difference maps and phase-space reconstruction to map EEG signals into 2D space. Six features reflecting different aspects of distance in Cartesian space were extracted from 500 interictal, ictal, and normal EEG signals. These features are named circle area, area of the octagon, summation of vector lengths, circular radius in triangles, triangle area, and centroid to centroid, respectively. More regular geometries appear in the 2D projections of interictal and normal EEG signals, while the edges of the 2D EEG projections in the ictal group appeared clearer than those in the other two groups. This method achieved 99.3% ACC under a 10-fold cross-validation strategy using SVM and KNN classifiers. The same method was also applied to the diagnosis of schizophrenia by Akbari et al. (2021c). The EEG samples included 14 subjects with schizophrenia and 14 controls, recorded with 19 EEG channels using the 10–20 system. Fifteen graph features were extracted to evaluate the chaotic behavior of the phase-space dynamics. The results showed that the KNN classifier with city-block distance achieved 94.80% ACC under the 10-fold cross-validation strategy. Another machine learning classifier for diagnosing schizophrenia based on EEG signals of event-related potentials (ERP) was designed by Zhang (2019). The EEG signal of motor actions was shown to be one of the biomarkers for identifying schizophrenia (Ford et al., 2014). Hence, the characteristics of the EEG signals of 49 schizophrenia patients and 32 healthy controls were captured by sensory tasks including button pressing and/or auditory stimuli. The results showed that the highest classification accuracy of 81.1% was achieved by the RF algorithm.

The different diagnostic methods based on EEG signals are shown in Table 2. Greater activation in the frontal, occipital, parietal, and central regions could be considered a diagnostic criterion for ASD. Computer-aided signal-processing techniques (e.g., the Fast Fourier Transform and Wavelet Transform) were utilized to reduce the complexity of EEG signals, and automatic classification was realized by machine learning methods (e.g., SVM, KNN, ternary CENTRIST, and RF) applied to the extracted EEG features. The classification accuracies for both depression and epilepsy were as high as 99.3%, and the diagnostic accuracy for schizophrenia reached 94.8%. Furthermore, the goal of simultaneously diagnosing two mental diseases (epilepsy and depression) was achieved with EEG. The experiments in Table 2 are all based on short-duration EEG recordings. Although this approach is relatively quick and low cost, it is limited in its ability to capture the abnormal brain waves of patients during an episode. In the future, 24-h monitoring and mobile brain-computer interfaces (BCI) will gradually become potential directions for general EEG applications (Chi et al., 2012; Amaral et al., 2018).

TABLE 2

TABLE 2. Examples of EEG signal sensors towards mental disease diagnosis. ERP, event-related potential; ASD, autism spectrum disorder; ternary CENTRIST, a texture classifier; CC, centered correntropy; ACC, average classification accuracy; KNN, k nearest neighbors; EWT, empirical wavelet transform; B-PSO, binary particle swarm optimization; RF, random forest.

3.3 EOG signal sensors

The electrooculogram sensor is another tool that can detect mental diseases. Both EOG and EEG use an array of electrodes to capture signals (Kaur, 2021). EOG is a method of sensing eye movement that measures the standing electrical potential between the retinal pigment epithelium and the photoreceptor cells. EOG signals are recorded by placing skin electrodes on the lateral and medial canthus (or above and below the eyelids) of each eye, which measure horizontal (or vertical) eye movements; ground electrodes are attached to the earlobe or forehead (Figure 5) (Bulling et al., 2011; Creel, 2019). Measuring EOG signals requires patients to acclimatize in a well-lit room for at least 30 min. Before the test, there is a light acclimation period of about 10 min, and after the electrodes are attached, there are 15-min dark and bright-light phases. The movement of the eyes creates a voltage fluctuation of approximately 2–5 mV between the electrodes on either side of the eye, which is plotted on the computer (Creel, 2019).
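A common way to turn such raw EOG traces into eye-movement events is to low-pass filter the signal and threshold its velocity (first derivative). The sketch below is a hedged illustration on synthetic data; the sampling rate, cutoff frequency, and velocity threshold are assumptions rather than values from the cited protocol.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                             # assumed EOG sampling rate (Hz)
rng = np.random.default_rng(4)
eog_h = np.cumsum(rng.normal(size=fs * 10)) / 50.0   # placeholder horizontal EOG trace (mV)

# Low-pass filter to suppress high-frequency noise (cutoff is illustrative)
b, a = butter(4, 30 / (fs / 2), btype="low")
eog_filtered = filtfilt(b, a, eog_h)

# Velocity via first derivative; samples above threshold are flagged as eye-movement events
velocity = np.gradient(eog_filtered) * fs            # mV per second
threshold = 5.0                                      # illustrative velocity threshold
event_samples = np.where(np.abs(velocity) > threshold)[0]
print(f"{event_samples.size} samples exceed the velocity threshold")
```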

FIGURE 5

FIGURE 5. (i) The wearable EOG equipment (Majaranta and Bulling, 2014). (ii) Position of electrode for recording EOG signal (v: vertical, h: horizontal, and r: reference) (Bulling et al., 2011; Creel, 2019). (iii) EOG eye movements were recorded in three phases (Creel, 2019).

The abnormality of smooth pursuit eye movement (SPEM) was demonstrated as a genetic marker of schizophrenia in early experiments (Siever et al., 1985; Matsue et al., 1994), and the same dysfunction is present in first-degree relatives of schizophrenics (Holzman et al., 1974). Kathmann et al. (2003) described a method to evaluate the integrity of the SPEM system by comparing eye velocity and target velocity. The study subjects included 103 schizophrenic patients, 53 relatives of patients, and 72 healthy controls. In the eye-tracking task, subjects tracked the movement of white circles, combined with auditory and visual distraction tasks, while horizontal eye movements were recorded by EOG. The pursuit gain, the velocity of eye movement divided by the velocity of target movement, showed noteworthy deficits in both schizophrenia and affective disorders, and unaffected biological relatives of schizophrenia patients had lower pursuit gain than healthy subjects. This finding further supported the view that a deficit in SPEM gain can be regarded as a phenotypic marker of genetic predisposition to schizophrenia. In addition to the abnormality of SPEM, the Quality Extinction Test (QET) is another indicator that can identify damage to the parietal and frontal lobes (Scarone et al., 1982). Scarone et al. reported a higher incidence of left-side extinctions on the QET in patients with schizophrenia (Scarone et al., 1982; Scarone et al., 1983). Accordingly, this research team further developed a method to simultaneously investigate two features of central nervous system disorder in schizophrenia, namely abnormalities of SPEM and abnormalities of the QET (represented by loss of touch). The results of the experiment indicated that more patients with left-side extinctions had SPEM abnormalities, which suggested that simultaneous impairment of these two psychophysiological indicators can be applied to detect schizophrenics (Scarone et al., 1987). In addition, Lencer et al. (2000) tried to determine whether genetic factors influenced families with sporadic schizophrenia (single occurrences of schizophrenia) through dysfunction of smooth-pursuit performance. The saccade amplitudes, saccade rates, and gains of three kinds of family, including 8 families with sporadic schizophrenia (n = 44), 8 families with multiple occurrences of the disease (n = 66), and 9 healthy families (n = 77), were recorded by an infrared eye-tracker. The saccade amplitudes, saccade rates, and gain values in families with sporadic schizophrenia differed significantly from healthy controls at 30°/sec triangle-wave stimulation, but not from familial-schizophrenia families. In addition, target direction significantly affected smooth-pursuit maintenance in families with familial and sporadic schizophrenia. The results supported the hypothesis that genetic factors may be implicated in sporadic-schizophrenia families.
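Pursuit gain, defined above as eye velocity divided by target velocity, can be computed directly from position traces. The sketch below is a hedged illustration on synthetic sinusoidal data; the use of median velocities and the noise level are assumptions, not the exact procedure of the cited studies.

```python
import numpy as np

def pursuit_gain(eye_pos, target_pos, fs):
    """Smooth-pursuit gain: median eye speed divided by median target speed."""
    eye_vel = np.gradient(eye_pos) * fs        # deg/s
    target_vel = np.gradient(target_pos) * fs
    moving = np.abs(target_vel) > 1.0          # ignore samples where the target is nearly still
    return np.median(np.abs(eye_vel[moving])) / np.median(np.abs(target_vel[moving]))

fs = 250
t = np.arange(0, 10, 1 / fs)
target = 10 * np.sin(2 * np.pi * 0.4 * t)                             # placeholder target trace (deg)
eye = 0.8 * target + np.random.default_rng(5).normal(scale=0.005, size=t.size)
print(f"gain ≈ {pursuit_gain(eye, target, fs):.2f}")                  # roughly 0.8 for this toy trace
```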

Due to the development of deep learning, research on the automatic detection of sleep disorders through EOG has gradually been carried out. Rahman et al. (2018) developed a dynamic and automatic sleep scoring system based on single-channel EOG. In this work, EOG signals of 38 patients in different sleep stages, namely Awake, S1, S2, S3, S4, and REM (rapid eye movement) (Rechtschaffen AJBis, 1968), were extracted from three public databases (SLEEP-EDF, SLEEP-EDFX, and ISRUC-SLEEP). These signals were decomposed by the Discrete Wavelet Transform (DWT) in order to extract highly discriminatory features from them, e.g., moment-based measures. After that, Neighborhood Component Analysis, a feature reduction technique, was employed to reduce the features and compact the model. RF, SVM, and RUSBoost (Random Under-Sampling Boosting) algorithms were used to classify the data from the three databases at different classification stages, respectively. The proposed method had higher accuracy than other sleep stage classification techniques based on single-channel EEG, single-channel EOG, and dual-channel EOG, and made significant progress on the S1 sleep stage, which is difficult to detect visually. In addition, Tasnim et al. (2019) designed another sleep state classification method based on single-channel EOG. The raw data were obtained from the SLEEP-EDF database and divided into the same six sleep stages as above. The features of the EOG signals were extracted and reduced through Variational Mode Decomposition and Neighborhood Component Analysis to obtain meaningful statistical features such as spectral entropy measures. In the model training stage, three widely used classifiers, RUSBoost, RF, and KNN, were used. The proposed algorithm achieved 96.537%, 93.05%, 90.57%, 89.21%, and 88.083% overall accuracy for 2-class to 6-class sleep staging, respectively, and 65.092% accuracy for the S1 stage. In the work of Sharma et al. (2022), a system for the automatic recognition of sleep disorders based on a single-modal EOG signal was proposed; the EMG signal was also recorded. The database used in the study was the PhysioNet database, which contained the records of 16 healthy subjects and 92 patients with sleep disorders. DWT was used to decompose the EOG and EMG signals and extract Hjorth parameter (HOP) features. Highly discriminative HOP features were sent to different classifiers, including SVM, Boosted Tree (BT), and Ensemble Bagged Tree Classification (EBTC). Among them, the EBTC classifier with 10-fold cross-validation showed the best performance, achieving 94.3% accuracy when using the deep-sleep-stage data.
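A minimal sketch of the DWT-plus-classifier idea described above, assuming the PyWavelets library, simple per-sub-band statistics instead of the cited feature sets, and synthetic 30-s epochs; the wavelet, decomposition level, and labels are illustrative.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dwt_features(epoch, wavelet="db4", level=5):
    """Basic statistics (mean |c|, std, energy) per DWT sub-band; choices are illustrative."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return feats

fs = 100                                    # assumed EOG sampling rate (Hz)
rng = np.random.default_rng(6)
epochs = rng.normal(size=(60, fs * 30))     # placeholder 30-s single-channel EOG epochs
stages = np.tile(np.arange(5), 12)          # placeholder sleep-stage labels (5 classes)

X = np.array([dwt_features(e) for e in epochs])
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(rf, X, stages, cv=5).mean())
```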

As shown in Table 3, schizophrenia and sleep disorders can be diagnosed based on EOG signals. Schizophrenia patients and their unaffected relatives showed significantly lower pursuit gain, and patients with left-side extinctions showed abnormal SPEM. In addition, machine learning methods including RUSBoost, EBTC, RF, SVM, and KNN were used to classify features of eye signals during sleep, with the highest accuracy reaching 96.537%. EOG devices are lightweight, easy to wear, and can even capture eye signals during sleep. However, a significant amount of time is required for preparation and light adaptation before the examination.

TABLE 3

TABLE 3. Examples of EOG signal sensors towards mental disease diagnosis. SPEM, smooth pursuit eye movement; QET, quality extinction test; RF, random forest; RUSBoost, random under-sampling boosting; BT, boosted tree; EBTC, ensemble bagged tree classification; REM, rapid eye movement.

3.4 VR-based sensors

Emerging VR applications, which are intended for navigation and orientation, cognitive and memory functions, facial identification, and other instrumental activities of daily life, have exhibited practical uses in neuropsychological assessments (García-Betances et al., 2015). Compared with the three methods discussed in Sections 3.1–3.3, more kinds of mental diseases can be diagnosed through VR. VR transmits information from real life to the virtual world, providing the possibility for people to perform activities in a virtual space (Goharinejad et al., 2022). From a research perspective, the use of VR allows for the repetition of clinical procedures and continuous data collection in a virtual world (Tieri et al., 2018), and provides excellent visual and auditory immersion and interaction during tasks (Climent et al., 2021). According to the degree of immersion, VR can be divided into fully immersive, semi-immersive, and non-immersive systems. Fully immersive VR systems are typically equipped with head-mounted displays (HMDs), data gloves, gesture control armbands, gamepads, and speakers (Figure 6), placing participants inside a virtual environment for the highest level of immersion. Semi-immersive VR presents a visual virtual environment through a relatively large flat-screen display. In non-immersive systems, participants interact using traditional PC monitors, keyboards, and mice (Kim et al., 2009; Anthes et al., 2016; Mehrfard et al., 2019; Yeung et al., 2021).

FIGURE 6

FIGURE 6. (i) The most prominent HMDs. (A) The Samsung GearVRInnovator Edition. (B) The Sony PlayStation VR. (C) The Google Cardboard. (ii) The most typical controllers. (A) The Oculus Half Moon. (B) The Glove ONE. (C) The Myo gesture control armband (Anthes et al., 2016).

Early on, numerous researchers used VR to assess the impairment of daily activities in AD. Allain et al. (2014) assessed daily activity impairment in AD patients through a non-immersive virtual coffee-making task. Performance evaluation included five indicators: time to completion, accomplishment score, omission error score, commission error score, and total errors. The results showed that AD patients fared worse than healthy controls. Yamaguchi et al. (2012) developed a dual-modality VR platform for training AD patients on toast- and coffee-making tasks. AD patients had more omissions, mistakes, and repetitive behaviors during the tasks than controls, demonstrating deficits in the daily activities of AD patients. However, AD patients achieved performance levels similar to controls in a short period of time after learning. In recent years, increasing evidence has shown that impaired spatial navigation and orientation deficits are important biomarkers for cognitive assessment in patients with MCI. Patients exhibited impairments in the ability to use egocentric (eye/head/body-based) and allocentric (map-based) navigational strategies, which were associated with impaired memory and decreased attention early in AD (Coughlan et al., 2018a; Coughlan et al., 2018b; Puthusseryppady et al., 2020). Serino et al. (2015) designed a non-immersive VR-based procedure to assess the ability to encode, store, and synchronize different spatial representations. Participants included 15 amnestic MCI patients, 15 AD patients, and 15 healthy controls. The task required them to memorize the location of a plant and then find it later from other directions (the plant location was not marked). The results of the experiment suggested that patients with amnestic MCI have deficits in the ability to encode and store allocentric, viewpoint-independent representations, whereas AD patients have specific deficits in storing allocentric viewpoint-independent representations and synchronizing them with egocentric, viewpoint-dependent representations. Plancher et al. (2012) assessed the spatial and episodic memory of participants using a virtual active exploration task (as a car driver) and a passive exploration task (as a passenger). The results verified that spatial allocentric memory could be used to identify amnestic MCI patients.

The emergence of advanced VR equipment and technology allowed for a type of spatial immersion termed "telepresence". Howett et al. examined differences in entorhinal-cortex-based navigation (Figure 7i) in 45 MCI patients versus 44 healthy controls using an immersive VR path integration test (Howett et al., 2019). Navigation performance correlated with MRI measurements of entorhinal cortex volume. Classification accuracy on the path integration task was compared with cognitive tests for early AD. Performance on the task was evaluated using three outcome metrics: absolute distance error, scaled angle error, and scaled linear error. Statistical analysis used linear mixed-effects models. The MCI group showed significantly more errors in the navigation task. Furthermore, the path integration task demonstrated higher diagnostic sensitivity and specificity for diagnosing AD patients than the best cognitive tests. Puthusseryppady et al. (2022) further correlated performance on a VR navigation test with performance in community navigation to see whether spatial disorientation could be predicted. The tests were divided into three parts. The Virtual Supermarket Test (VST) was a spatial navigation test on an iPad that assesses egocentric orientation, allocentric orientation, and heading. Sea Hero Quest (SHQ) was a mobile game that measures the ability to navigate space. The Detour Navigation Test (DNT) was a round-trip route test based on a highly familiar environment, during which participants used egocentric and allocentric navigation strategies. Compared with controls, AD patients showed impairments on the VST, SHQ, and DNT. The experiment also suggested that future VR-based diagnostic technology could be embedded in everyday electronic products. With the popularity of computerization, the combination of wearable devices, deep learning, and VR can provide a low-cost, automated way to screen for and predict disease (Tarnanas et al., 2014). Jiang et al. (2020) developed a 3D maze procedure to assess and train the navigation and cognitive abilities of AD patients and healthy individuals. The program was based on the asynchronous advantage actor-critic (A3C) algorithm to train an agent simulating the cognitive degradation process of AD patients, and the neural network was combined with the pathogenesis of AD to reveal the underlying mechanisms leading to the disease. As shown in Figure 7ii, behavioral data during navigation were collected and analyzed through three models. The results showed that the patient-mimicking navigation models were inferior to those representing healthy individuals in terms of average number of steps, path efficiency, and decision-evaluation ability.
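The outcome metrics above compare where a participant stops with the true return location. As a hedged illustration, assuming simplified definitions rather than the exact "scaled" metrics of the cited study, the sketch below computes an absolute distance error and an angular error between the response vector and the true return vector; the coordinates are made up.

```python
import numpy as np

def path_integration_errors(response_xy, true_xy, origin_xy):
    """Absolute distance error plus an illustrative angular error (radians)."""
    response_xy, true_xy, origin_xy = map(np.asarray, (response_xy, true_xy, origin_xy))
    distance_error = np.linalg.norm(response_xy - true_xy)
    v_resp = response_xy - origin_xy           # direction the participant actually took
    v_true = true_xy - origin_xy               # direction back to the true start location
    cos_angle = np.dot(v_resp, v_true) / (np.linalg.norm(v_resp) * np.linalg.norm(v_true))
    angle_error = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return distance_error, angle_error

# Hypothetical trial: the return leg started at (3, 2), the true start was (0, 0),
# and the participant stopped at (1.2, 0.8)
print(path_integration_errors((1.2, 0.8), (0.0, 0.0), (3.0, 2.0)))
```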

FIGURE 7

FIGURE 7. (iA) Participants were asked to walk to positions 1, 2, and 3 (marked) in sequence during the VR task and return to the unmarked position 1 from memory. (B) Participants wear VR devices. (C) The participant tried to go back to position 1 without the marker (Howett et al., 2019). (iiA) The A3C-based navigation network structure (GA3C_LSTM) was used to simulate healthy individuals with normal cognitive levels. (B) Details of the size of each layer. (C) The noise navigation network (GA3C_Noise) models cognitively impaired MCI patients. (D) The dememory navigation network (GA3C_FF) simulates dementia patients with partial and complete loss of short-term memory (Jiang et al., 2020).

In addition to AD, depression (Gould et al., 2007; Voinescu et al., 2023) and epilepsy (Grewe et al., 2014) can also be diagnosed by VR-based methods through spatial memory and navigation ability. Voinescu et al. (2023) assessed impairment of attention and inhibition in depressed patients via the Nesplora virtual aquarium. Participants were required to pay attention and respond to the correct stimulus (a fish) during the 18-min task. The stimuli included auditory and visual ones, with the aim of exploring the relationship of attentional deficits to specific sensory processing. The results demonstrated that the depressed patients had omissions and errors for the visual stimuli, but not for the auditory stimuli, and the outcomes of the Nesplora aquarium were found to be highly correlated with those of standardized neuropsychological measures. In contrast to the diagnostic approach above, ASD (Cai et al., 2013; Kim et al., 2015) and schizophrenia (Freeman, 2008; Dyck et al., 2010) were diagnosed through VR tasks based on social impairment and emotional blunting. In the experiment by Kim et al. (2015), 42 participants (19 ASD children and 23 healthy controls) used a joystick to adjust the social distance from a virtual object while trying to identify different emotions expressed by a virtual avatar. The VR emotion sensitivity test (V-REST) (Kim et al., 2010) included six emotions, avatars of male and female gender, and four levels of emotional intensity. Both the ASD children and the control group showed relatively high accuracy in identifying happy emotions. On the other hand, interpersonal distance can be affected by social anxiety (March et al., 1997); therefore, participants were instructed to use a joystick to adjust the distance to the avatar. Compared with controls, ASD patients showed less tendency to move towards avatars expressing happy emotions and moved further away from avatars expressing negative emotions. However, most of the research on diagnosing ASD, depression, schizophrenia, and epilepsy through VR technology was published around a decade ago (Dechsling et al., 2021). In recent years, the focus of research combining VR and mental diseases has shifted from diagnosis to treatment. Multiple research projects have reported the efficacy of VR-based training (Herrero and Lorenzo, 2020; Dellazizzo et al., 2021; Yen and Chiu, 2021). In addition, augmented reality (AR), as a relatively new concept, provides an effective experimental basis for further understanding and treating mental diseases (Lorenzo et al., 2019; Rohrbach et al., 2019).

In essence, VR-based sensors create countless possibilities for immersing participants in virtual tasks, such as navigation, social, and daily-activity-related tasks, as illustrated in Table 4. Through these virtual tasks, schizophrenia, epilepsy, depression, ASD, and AD can be distinguished from healthy controls. During a task, the computer application can modify the simulated environment according to the responses and actions of participants (Riva et al., 2020); hence, the interaction with the virtual object can be repeated and adjusted countless times. However, creating engaging VR experiences is a highly complex challenge. The interaction of mainstream VR devices is currently based on handheld controllers, which requires participants to invest time in learning the controls, and the degree of cooperation of the participants can affect the experimental results. Overly complex tasks can frustrate participants. It is therefore necessary to provide more convenient spatial interaction modes, such as controlling virtual objects through eye movements, voice, and gestures. In addition, older participants, especially those with AD, are more likely to feel dizzy and nauseated when using VR. Technical limitations and a failure to comprehend user perception and interaction during the design process are the reasons behind these poor experiences (Jerald, 2015). Perhaps blurring the boundaries between VR and AR and allowing users to adjust the balance of virtual and real elements can lead to a more ideal experience.

TABLE 4. Examples of virtual-environment-based sensors towards mental diseases diagnosis. VST, virtual supermarket test; SHQ, sea hero quest; DNT, detour navigation test; A3C, asynchronous advantage actor-critic; GA3C_Noise and GA3C_FF, models simulating mild cognitive impairment and Alzheimer's patients, respectively.

3.5 Multiple-signal sensors

In recent years, studies have made significant progress in detecting mental diseases using three types of biosensors: vision-based, EEG-signal, and EOG-signal sensors. In addition to using these biosensors individually, some researchers combine them to achieve higher detection accuracy and efficiency. Multi-signal sensors collect data in the same manner as the individual sensors described above: eye movement data, EEG, and EOG signals are collected through multiple tasks, including dynamic visual tasks and EEG and EOG recording tasks. The features of the multiple signals are then extracted and classified by machine learning algorithms for higher accuracy.

Based on the underlying characteristics of MCI, including impaired visual attention and abnormal EEG rhythms, Jiang et al. designed a rapid and automatic MCI detection approach for primary care (Jiang et al., 2019a). This detection approach included a dynamic visual tracking task and an EEG recording task (Figure 8i). In the visual tracking trial, subjects were asked to gaze at a purple ball moving counterclockwise, following its direction of movement while avoiding being distracted by two other small balls. Eye movement features, including blink time, blink frequency, fixation time, and sustained attention span, were recorded during the trial. The EEG signals were detected and filtered by two dry sensors in the resting state. Forty features (12 eye-movement-based features and 28 EEG-based features) were extracted using linear and nonlinear analysis in the combined EEG and eye movement method. The features associated with delta and alpha EEG dysrhythmia and impaired visual attention were screened by LR analysis. The final MCI screening model generated by the whole detection procedure reached a high accuracy of 97.8%. This combined screening model can automatically complete the diagnostic task in just 5 min and is suitable for large-scale disease screening, which is far more efficient than traditional lengthy test methods. Building on this research, the team of Jiang et al. (2019b) further developed a deep belief network (DBN) model (Figure 8ii) for more efficient early detection of MCI patients. The DBN consists of one input layer, several hidden layers, and one output layer, with the functions of learning features and performing classification. Feature extraction included recording neuropsychological test scores and extracting EEG and eye-movement features using linear and nonlinear approaches in physiological tests. The collected eye-dynamics and EEG-based features were input into the DBN, where two hidden layers processed them into more representative features, and the output layer gave the final decision of whether a participant had MCI. In this study, the DBN model selected, simplified, and classified features within its hidden layers and reached a detection accuracy of 89.87%. Early screening for MCI has great significance for clinical application in primary care, as it could reduce the number of people who develop AD or delay the progression of AD.

FIGURE 8. (iA) Visual stimulus task. (B) The viewing distance. (C) The experimental procedure for the visual tracking task consisted of 2 min of EEG recording and 3 min of eye movement recording. (D) The blue dots represent the movement of the task ball; the green and red dots represent the loci of fixation distribution in the healthy and MCI groups, respectively. (E) The AUC of various MCI detection models (Jiang et al., 2019a). (iiA) The experimental procedure consisted of 12 min of testing, 2 min of device calibration, 2 min of EEG recording, and 3 min of eye movement recording. (B) The structure of the DBN model. The neuropsychological, EEG, and eye movement feature vectors were input, and the features of the raw input vectors were learned through RBM1 and RBM2 in order to classify MCI and healthy samples (Jiang et al., 2019b).
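
As an illustration of the feature-level fusion described above, the sketch below concatenates hypothetical eye-movement and EEG feature matrices and evaluates a logistic-regression screening model with cross-validation. The synthetic data, the 12/28 feature split, and the choice of classifier are assumptions made for demonstration and do not reproduce the published pipelines.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical dataset: 60 participants, 12 eye-movement features (blink rate,
# fixation time, ...) and 28 EEG features (band powers, entropy, ...).
eye_features = rng.normal(size=(60, 12))
eeg_features = rng.normal(size=(60, 28))
labels = rng.integers(0, 2, size=60)          # 1 = MCI, 0 = healthy control (toy labels)

# Multi-signal fusion: simple feature-level concatenation.
X = np.hstack([eye_features, eeg_features])   # shape (60, 40)

# Screening model evaluated with 5-fold cross-validation.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, labels, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f}")
```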

Kang et al. (2020) designed a machine-learning approach to identify children with autism from eye-tracking and EEG data. The subjects were children aged 3–6 years. Eye-tracking data were recorded while they gazed at the faces of own-race and other-race strangers. Eight areas of interest (AOIs), including the eyes, left eye, right eye, nose, mouth, whole face, body, and background, were selected to quantify fixation. Power spectrum analysis was used to detect abnormal rhythm fluctuations in the EEG of autistic children. The minimum-redundancy-maximum-relevance (MRMR) method was used for EEG feature selection, and an SVM classifier was used to separate patients from normal controls. The results showed that the classification model combining eye-tracking and EEG features achieved a higher classification rate than either single modality, up to 85.44%. This study demonstrated that a multi-modal, multi-feature fusion classification approach provides promising classification accuracy for future ASD diagnosis.
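
A common way to obtain the kind of spectral EEG features used in such studies is to estimate band power with Welch's method and feed the result to an SVM. The sketch below shows this generic recipe on synthetic segments; the sampling rate, band limits, and labels are assumptions and are not taken from Kang et al. (2020).

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 250  # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg_segment, fs=FS):
    """Return the mean spectral power in each classical EEG band for a 1-D signal."""
    freqs, psd = welch(eeg_segment, fs=fs, nperseg=fs * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

rng = np.random.default_rng(1)
segments = rng.normal(size=(40, FS * 10))      # 40 ten-second EEG segments (synthetic)
labels = rng.integers(0, 2, size=40)           # 1 = ASD, 0 = control (toy labels)

X = np.array([band_powers(seg) for seg in segments])
clf = SVC(kernel="rbf").fit(X, labels)
print(f"training accuracy on toy data: {clf.score(X, labels):.2f}")
```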

Many automatic detection methods based on biological signals have been proposed previously; however, their accuracy still needs to be improved. For practical clinical application, Zhu et al. (2019) proposed a content-based ensemble model (CBEM) to enhance the accuracy of depression detection. The integrated model was composed of several classifiers. Two experimental datasets were collected from a free-viewing eye-tracking task and a task-state EEG recording, respectively, with stimulation consisting of different emotional faces. The data samples were divided into subsets according to data type, and the majority vote of the subsets was then used to predict the labels of the subjects. Applied to the two experiments separately, the CBEM achieved 82.5% (eye-tracking) and 92.73% (EEG) accuracy, better results than traditional classification methods. Ding et al. designed another multi-modal machine learning method to diagnose depression. In a free-gaze task, participants viewed sets of images involving four emotional stimuli (positive, neutral, anxious, and threatening) (Ding et al., 2019). A Tobii Eye-tracker 4C was used to record visual gaze data. Frontal EEG signals were collected by a MUSE EEG headband while participants watched eight short videos involving positive, neutral, and anxious emotions. Meanwhile, the Grove-GSR monitor, a biofeedback device, was used to record skin conductance. Galvanic skin response (GSR) is a fluctuation in skin electrical resistance caused by changes in sweat gland activity under sympathetic nervous system control and is often used to detect mood (Nourbakhsh et al., 2012). Features were extracted from the three types of input data, and three machine learning algorithms, RF, SVM, and LR, were used to build classification models. The F1 score is the harmonic mean of the precision and recall of a binary classification. According to the results, the LR algorithm obtained the highest F1 score, achieving 79.63% accuracy, 76.67% precision, 85.19% recall, and an 80.70% F1 score. This study integrated three physiological parameters and achieved a high overall classification score, suggesting that multi-modal fusion can improve the performance of classification models.
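
The core idea of a content-based ensemble, splitting the data by content subset, training one classifier per subset, and taking a majority vote, can be sketched as follows. The subset definition, base learner, and synthetic data are simplifying assumptions, and this is not the CBEM implementation of Zhu et al. (2019).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 80
# Hypothetical content-based subsets: features recorded under different
# emotional-face stimulus blocks (e.g., positive / neutral / negative).
subsets = [rng.normal(size=(n, 10)) for _ in range(3)]
y = rng.integers(0, 2, size=n)                 # 1 = depressed, 0 = control (toy labels)

# Train one base classifier per content subset.
base_models = [LogisticRegression(max_iter=1000).fit(Xs, y) for Xs in subsets]

# Majority vote across the subset-level predictions gives the final label
# (ties are broken towards the positive class here).
votes = np.stack([m.predict(Xs) for m, Xs in zip(base_models, subsets)])
final = (votes.mean(axis=0) >= 0.5).astype(int)
print(f"ensemble accuracy on toy data: {(final == y).mean():.2f}")
```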

The P300 waveform is a neuro-evoked potential component of the EEG that reflects the electrical activity generated in response to specific cognitive, sensory, or motor events. P300 latency is sensitive to cognitive impairment and can be used to detect schizophrenia (Zhang Y. et al., 2021c). Blackwood et al. (1994) used the auditory P300 ERP and SPEM, two characteristic markers of schizophrenia, to test their association with schizophrenia and functional psychosis. A total of 20 families were recruited for the genetic linkage study, each with at least two members diagnosed with schizophrenia. Most families were found to have one or two abnormal measurements. P300 latency and eye-tracking measurements were normally distributed in schizophrenia patients and in controls but bimodally distributed within families. Abnormalities in SPEM and/or ERP occurred in family members with schizophrenia, and about half of the non-schizophrenic relatives also presented abnormal data. This experiment demonstrated the potential role of indicators of psychophysiological abnormality in genetic research.
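
One simple and commonly used way to quantify P300 latency is to locate the largest positive peak of the averaged ERP within a post-stimulus search window. The sketch below illustrates this on a synthetic waveform; the 250–500 ms window and the sampling rate are assumptions rather than parameters from Blackwood et al. (1994).

```python
import numpy as np

def p300_latency(erp, fs, window=(0.25, 0.50)):
    """Estimate P300 latency (s) as the time of the largest positive peak of an
    averaged ERP inside a post-stimulus window (stimulus onset at t = 0)."""
    t = np.arange(erp.size) / fs
    mask = (t >= window[0]) & (t <= window[1])
    return float(t[mask][np.argmax(erp[mask])])

# Toy averaged ERP sampled at 500 Hz with a positive deflection near 350 ms.
fs = 500
t = np.arange(0, 0.8, 1 / fs)
erp = 5.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.03 ** 2))
erp += 0.2 * np.random.default_rng(3).normal(size=t.size)   # measurement noise

print(f"estimated P300 latency: {p300_latency(erp, fs) * 1000:.0f} ms")
```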

In previous studies, a traditional memory scale, the Wechsler Memory Scale (WMS), was used to evaluate the memory of patients with epilepsy (Wechsler, 1945). However, such scales fail to separate the effects of visual attention on the evaluation process, which means that behavioral and associative processing are conflated. To address this problem, Zhu et al. (2021) developed an automated computer-based platform for exploring the correlation between memory performance and visual attention in patients with TLE, using an eye-tracker and EEG to separate the effects of visual attention on memory tasks. The task consisted of a WMS assessment (Digit Span Backward and Visual Recognition tasks), cognitive oculomotor games, and all-day video EEG recordings. The results showed that TLE patients performed worse than normal subjects in verbal memory and visual recognition. It was further confirmed that TLE patients spent longer searching for the target in the memory visual-stimulation games, and they also had more visits to, and longer first fixation times on, the AOIs during the recall process. Considering the characteristics of bilateral temporal interictal epileptiform discharges (IEDs), the memory performance of TLE patients was negatively correlated with the temporal lobe discharge peak during sleep. In general, patients with TLE have deficits in short-term memory, and a measuring platform combining a wearable eye-tracker and VEEG enables long-term monitoring of memory performance and visual attention in TLE.
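
Eye-tracking measures such as first fixation time, visit count, and dwell time on an AOI can be derived from a list of detected fixations. The following sketch shows one straightforward way to do so; the Fixation fields and the rectangular AOI are illustrative assumptions, not the platform of Zhu et al. (2021).

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    t: float      # onset time (s) from trial start
    x: float      # gaze x position (pixels)
    y: float      # gaze y position (pixels)
    dur: float    # fixation duration (s)

def aoi_metrics(fixations, aoi):
    """Compute first-fixation time, visit count, and dwell time for a
    rectangular AOI given as (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = aoi
    in_aoi = lambda f: x0 <= f.x <= x1 and y0 <= f.y <= y1
    inside = [f for f in fixations if in_aoi(f)]
    if not inside:
        return {"first_fixation_time": None, "visits": 0, "dwell_time": 0.0}
    # A new "visit" starts whenever gaze re-enters the AOI after leaving it.
    visits, prev_in = 0, False
    for f in fixations:
        now_in = in_aoi(f)
        visits += int(now_in and not prev_in)
        prev_in = now_in
    return {
        "first_fixation_time": min(f.t for f in inside),
        "visits": visits,
        "dwell_time": sum(f.dur for f in inside),
    }

fixes = [Fixation(0.2, 100, 80, 0.25), Fixation(0.6, 400, 300, 0.30), Fixation(1.1, 110, 90, 0.20)]
print(aoi_metrics(fixes, aoi=(50, 50, 200, 150)))
```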

Application examples of multi-signal sensors are listed in Table 5. By combining multiple non-invasive neural-signal recording methods, more mental diseases can be diagnosed. Moreover, comparing or combining two or more diagnostic methods through machine learning can improve accuracy; for example, 97.8% accuracy was achieved in the diagnosis of MCI by combining eye-tracking data and EEG signals. Although a screening model integrating multiple classifiers can complete the diagnosis of MCI in as little as 5 min, the data collection process of multi-signal sensors is essentially the same as that of single sensors, still requiring long preparation and collection times and redundant steps. Therefore, wearable integrated multi-signal sensors with shorter acquisition times have great potential to become a development trend in the future.

TABLE 5. Examples of multiple-signal sensors towards mental diseases diagnosis. DBN, deep belief network; CBEM, content-based ensemble model; AOI, areas of interest; ERP, event-related potential.

4 Conclusion and future perspective

In this review, the biosensing approaches used to diagnose mental disorders are classified into five categories: vision-based, EEG-signal, EOG-signal, VR-based, and multiple-signal sensors. The vision-based sensors capture eye movement signals through eye-tracking devices. Both EEG- and EOG-signal sensors apply electrode arrays to capture signals. VR-based sensors collect user responses and behaviors by immersing users in a virtual 3D environment. The multiple-signal sensors combine two or three of the above biosensors.

The vision-based sensors quantify attentional bias from eye-tracking data. The rationale is to reveal the different attention-shifting mechanisms of patients and controls by comparing gaze and disengagement times during stimulus tasks (e.g., pictures, web pages, facial expressions, and communication) in the two groups. Currently, eye-tracking technology can identify AD, depression, and ASD, which suggests that vision can serve as a promising biomarker for detecting mental diseases. Patients with AD and depression showed less attentional preference for novel and positive stimuli in the experiments. Depressed patients spent more time disengaging from negative information, and the severity of the disorder was positively correlated with disengagement time. People with ASD tend to focus on details and ignore the whole. The accuracy of diagnosing ASD patients in face-to-face communication experiments reached 92.31%, indicating the potential for screening mental diseases during daily communication in the future. The vision-based method has the benefits of being non-invasive, wearable, and convenient for data acquisition. However, a limitation of this method is that the inherent tremor and blinking of the eye make it difficult to extract accurate data, and head movement can easily interrupt data collection. In addition, extra preparation time is necessary, e.g., to achieve optimal lighting conditions. The exclusion of patients wearing spectacles or contact lenses is also a problem and may affect the representativeness of the sample.

In addition to the vision-based method, patients with ASD and depression can also be diagnosed from EEG signals because of abnormal neuronal activity in their brains, and patients with epilepsy and schizophrenia can likewise be diagnosed by EEG. When viewing faces with different emotions, ASD patients exhibited stronger activation in the frontal, central, parietal, and occipital regions than healthy controls, which is related to abnormalities in attention, emotion, and executive function. The acquisition of EEG data from the left and right hemispheres only requires the patient to rest for 10 min, with no visual stimulation. Due to the complex, non-stationary, and nonlinear characteristics of EEG signals, several computer-aided diagnosis systems (CADS) have been proposed to evaluate the data. The performance of a CADS depends directly on how features are extracted and classified by machine learning techniques. The complexity of EEG signals can be reduced by the fast Fourier transform, the wavelet transform, and related methods. Moreover, machine learning methods such as SVM and KNN can realize automatic classification of epilepsy and depression with less computation and high classification accuracy (99.3% ACC), and screening schizophrenia patients with an RF classifier achieved 94% accuracy. In addition, two brain diseases (depression and epilepsy) could be diagnosed at the same time, which means that technology for screening two or more mental disorders simultaneously could become available in the future. However, the EEG-signal-based method is far from perfect. For example, not all patients have abnormal EEG activity at all times; patients may appear completely normal on short EEG tests, which can lead to errors in experimental results. Long-term EEG measurement is more conducive to detecting abnormal signals, but the disadvantage is that brain waves are easily disturbed by many factors, and abnormal EEG signals are difficult to detect when the disease is inactive. Scalp EEG signal acquisition also faces practical difficulties; for example, the obstruction caused by hair must be overcome so that signals can be collected stably.
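
As a minimal illustration of the wavelet-plus-classifier pipeline mentioned above, the sketch below extracts discrete-wavelet sub-band statistics from synthetic EEG segments and trains a KNN classifier. The wavelet family, decomposition level, and data are assumptions, and the example is not tied to any specific published CADS.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(signal, wavelet="db4", level=4):
    """Decompose an EEG segment with the discrete wavelet transform and return
    simple statistics (energy and standard deviation) of each sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([np.sum(c ** 2), np.std(c)])
    return feats

rng = np.random.default_rng(4)
segments = rng.normal(size=(50, 2560))        # 50 synthetic EEG segments (10 s at 256 Hz)
labels = rng.integers(0, 2, size=50)          # toy labels: 1 = patient, 0 = control

X = np.array([dwt_features(s) for s in segments])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(f"training accuracy on toy data: {knn.score(X, labels):.2f}")
```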

EOG is a technique for measuring eye movement and eye position based on changes in the retinal resting potential under light and dark adaptation. Wearing an EOG device requires only silver chloride electrodes placed on the skin at the inner and outer canthi. Compared with the other electrical signal detection technologies described above, EOG is lightweight, wearable, and easy to operate. In addition, EOG signals can be detected in patients with eye closure, as well as in young uncooperative patients or those with nystagmus. The method for diagnosing schizophrenic patients from EOG signals is based on the abnormal biomarkers of QET and SPEM. However, these methods were proposed about 20 years ago. As machine learning has become a mainstream statistical tool, it is now more common to diagnose schizophrenia in combination with EEG signals; in contrast, research on diagnosing schizophrenia from EOG signals alone has hardly progressed in recent years. The available results confirmed that schizophrenic patients and their unaffected relatives showed significantly lower pursuit gain, and more patients with left-side extinction showed abnormal SPEM. An interesting observation is that research progress on diagnosing sleep disorders has followed the opposite trajectory to that of schizophrenia: early diagnostic methods for sleep disorders were mainly based on EEG signals, whereas recent progress is mainly based on EOG signals combined with deep learning. Sleep disorders are diagnosed by using EOG and machine learning methods to extract features and score different sleep stages. However, the process of recording the EOG requires long periods of dark and light adaptation and is therefore limited by lighting conditions, and EOG signals are less sensitive than electromyography (EMG) signals.

By utilizing a range of visual, auditory, tactile, and olfactory stimuli, VR technology allows users to immerse themselves in a virtual environment, offering a unique method for diagnosing mental diseases. Innovative VR-based sensors have addressed the challenges of diagnosing AD, ASD, depression, schizophrenia, and epilepsy, focusing on deficits in navigation and spatial memory, daily activities and attention, facial emotion recognition, and social skills. For identifying these biomarkers, VR can provide an ideal virtual environment. Compared with other biosensors, VR has the advantages of being reproducible and programmable: in the virtual environment, researchers can easily control variables and repeat experiments, and can also deliberately introduce interference or noise. In addition to visual stimuli, VR-based tests also include auditory stimuli, and even the sense of smell could potentially be used in experiments, an advantage that the other sensors do not have. With the development of wireless HMDs, mobile electronic devices, and wireless communication technology, VR-based 24-h medical monitoring applications may become a common trend. However, VR technology also has some disadvantages. Elderly people are prone to dizziness in the virtual environment, and it is difficult for them to master controllers such as joysticks. A more ideal spatial interaction experience, such as incorporating voice, gesture, and eye controls, or allowing users to adjust the degree of virtuality and reality, is a possible solution.

The multiple-signal sensors combine features acquired by a variety of non-invasive neural-signal recording methods (vision, EEG, and EOG) and automatically perform the screening of mental diseases through an ensemble model composed of multiple classifiers. This integrated approach can detect a wider range of diseases and avoid the drawbacks of any single approach; for example, biological signals that are not visually detectable can be detected by EEG. In addition, the cross-validation of two or more detection methods can improve the accuracy of diagnosis to a certain extent. However, the greater variety of data collected means that patients spend longer in the diagnostic process. Therefore, more comfortable and integrated wearable devices will be a major trend in the future.

The results of this review can be utilized in other engineering sciences, such as multi-signal biosensors, wearable devices, machine learning, and telemedicine. Mental diseases are currently diagnosed on the basis of different biomarkers, and the biosensors acquire bio-signals from patients and healthy controls during stimulation tasks. Multi-signal sensors based on integrated classifiers provide a more efficient and accurate screening mode. Although such screening models can automatically complete a diagnosis within 5 min, the collection of different data still requires multiple sensors in different tasks, which makes the data collection process tedious for participants. In the future, more novel biosensors with higher accuracy, efficiency, and integration will be developed. For example, it may be possible to add eye-tracking and EOG signal collection functions to VR/AR head-mounted display devices, so that multiple signals can be collected in one task with one device. In addition, because of the intermittent nature of the onset of mental diseases, a short diagnosis (10 min) is not representative of the daily condition of patients. With the advancement of micro biosensor devices, wearable fabrics, flexible electronics, and other technologies, smaller and more comfortable wearable biosensors will achieve real-time monitoring throughout the day in daily life.

In addition to the visual signals, eye signals, brain signals, and VR reviewed in this paper, biosensors can also monitor many other physiological parameters, e.g., lactic acid and glucose in sweat metabolites, the electrocardiogram, and body temperature. A diagnostic approach that combines several biosensors may become mainstream in the future, leading to the diagnosis of a wider range of mental disorders. By combining deep learning and artificial intelligence methods, feature extraction and classification of physical and biochemical data from the human body can enable automated, large-scale diagnosis and new technologies for the simultaneous screening of multiple mental diseases. Diagnostic methods based on EEG signals and multi-signals have more commonly been combined with machine learning in recent years, including algorithms such as SVM, KNN, RF, and LR, which provide higher accuracy for the screening of mental diseases. Currently, all of the above diagnostic approaches are based in laboratory settings. The development of radio communication technology has greatly supported the diagnostic functions associated with medical devices: data can be sent directly from the medical testing device to a remote health center or hospital. Doctors will be able to conduct remote consultations and provide guidance through virtual reality technology, and patients will be able to put on their own testing devices and complete the diagnostic process at home.

Author contributions

YZ: Investigation, Writing—Original draft and editing, Visualization, Software. CL: Investigation, Writing—Original draft and editing, Software. NL and QW: Writing–Reviewing and Proofreading. QX: Data Curation. SZ and XS: Conceptualization, Supervision, Funding acquisition. All authors contributed to the article and approved the submitted version.

Funding

This review was supported by 2025 Key Technological Innovation Program of Ningbo City under Grant (No. 2022Z080), Ningbo Scientific and Technological Innovation 2025 Major Project (No. 2021Z108), and Yongjiang Talent Introduction Programme (No. 2021A-154-G).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abhang, P. A., Gawali, B., and Mehrotra, S. (2016). Introduction to EEG-and speech-based emotion recognition. Academic Press.

Acharya, J. N., Hani, A. J., Cheek, J., Thirumala, P., and Tsuchida, T. N. J. T. N. J. (2016). American clinical neurophysiology society guideline 2: Guidelines for standard electrode position nomenclature. Neurodiagn. J. 56 (4), 245–252. doi:10.1080/21646821.2016.1245558

Akbari, H., and Esmaili, S. (2020). A novel geometrical method for discrimination of normal, interictal and ictal EEG signals. Trait. Du. Signal 37 (1), 59–68. doi:10.18280/ts.370108

Akbari, H., Ghofrani, S., Zakalvand, P., and Sadiq, M. T. (2021c). Schizophrenia recognition based on the phase space dynamic of EEG signals and graphical features. Biomed. Signal Process. Control 69, 102917. doi:10.1016/j.bspc.2021.102917

Akbari, H., Sadiq, M. T., and Rehman, A. U. (2021b). Classification of normal and depressed EEG signals based on centered correntropy of rhythms in empirical wavelet transform domain. Health Inf. Sci. Syst. 9 (1), 9. doi:10.1007/s13755-021-00139-7

Akbari, H., Sadiq, M. T., Ur Rehman, A., Ghazvini, M., Naqvi, R. A., Payan, M., et al. (2021a). Depression recognition based on the reconstruction of phase space of EEG signals and geometrical features. Applied Acoustics, 179.

Allain, P., Foloppe, D. A., Besnard, J., Yamaguchi, T., Etcharry-Bouyx, F., Le Gall, D., et al. (2014). Detecting everyday action deficits in Alzheimer’s disease using a nonimmersive virtual reality kitchen. J. Int. Neuropsychological Soc. 20 (5), 468–477. doi:10.1017/s1355617714000344

Amaral, C., Mouga, S., Simões, M., Pereira, H. C., Bernardino, I., Quental, H., et al. (2018). A feasibility clinical trial to improve social attention in autistic spectrum disorder (ASD) using a brain computer interface. Front. Neurosci. 12, 477. doi:10.3389/fnins.2018.00477

Andreasen, N. J. T. L. (1995). Symptoms, signs, and diagnosis of schizophrenia. Lancet 346 (8973), 477–481. doi:10.1016/s0140-6736(95)91325-4

Anthes, C., García-Hernández, R. J., Wiedemann, M., and Kranzlmüller, D. (Editors) (2016). State of the art of virtual reality technology (IEEE aerospace conference; 2016: IEEE).

Ashraf, H., Sodergren, M. H., Merali, N., Mylonas, G., Singh, H., and Darzi, A. (2018). Eye-tracking technology in medical education: A systematic review. Med. Teach. 40 (1), 62–69. doi:10.1080/0142159x.2017.1391373

Belleville, S., Chertkow, H., and Gauthier, S. J. N. (2007). Working memory and control of attention in persons with Alzheimer's disease and mild cognitive impairment. Neuropsychology 21 (4), 458–469. doi:10.1037/0894-4105.21.4.458

Black, M. H., Chen, N. T. M., Iyer, K. K., Lipp, O. V., Bolte, S., Falkmer, M., et al. (2017). Mechanisms of facial emotion recognition in autism spectrum disorders: Insights from eye tracking and electroencephalography. Neurosci. Biobehav Rev. 80, 488–515. doi:10.1016/j.neubiorev.2017.06.016

Blackwood, D., Muir, W., Roxborough, H., Walker, M., Townshend, R., Glabus, M., et al. (1994). Schizoid personality in childhood: Auditory P300 and eye tracking responses at follow-up in adult life. J. Autism Dev. Disord. 24 (4), 487–500. doi:10.1007/bf02172130

Bo, H-X., Li, W., Yang, Y., Wang, Y., Zhang, Q., Cheung, T., et al. (2021). Posttraumatic stress symptoms and attitude toward crisis mental health services among clinically stable patients with COVID-19 in China. Psychol. Med. 51 (6), 1052–1053. doi:10.1017/s0033291720000999

Bocquillon, P., Dujardin, K., Betrouni, N., Phalempin, V., Houdayer, E., Bourriez, J. L., et al. (2009). Attention impairment in temporal lobe epilepsy: A neurophysiological approach via analysis of the P300 wave. Hum. Brain Mapp. 30 (7), 2267–2277. doi:10.1002/hbm.20666

Bulling, A., Ward, J. A., Gellersen, H., and Troster, G. (2011). Eye movement analysis for activity recognition using electrooculography. IEEE Trans. Pattern Anal. Mach. Intell. 33 (4), 741–753. doi:10.1109/tpami.2010.86

Burdea, G. C., and Coiffet, P. (2003). Virtual reality technology. John Wiley & Sons.

Busfield, J. (2011). Mental illness. Polity.

Cai, Q., Gao, Z., An, J., Gao, S., and Grebogi, C. (2020). A graph-temporal fused dual-input convolutional neural network for detecting sleep stages from EEG signals. IEEE Trans. Circuits Syst. II Express Briefs. 68 (2), 777–781. doi:10.1109/tcsii.2020.3014514

Cai, Y., Chia, N. K., Thalmann, D., Kee, N. K., Zheng, J., and Thalmann, N. M. (2013). Design and development of a virtual dolphinarium for children with autism. IEEE Trans. neural Syst. rehabilitation Eng. 21 (2), 208–217. doi:10.1109/tnsre.2013.2240700

Carskadon, M. A., and Dement, W. (2005). "Normal human sleep: An overview," in Principles and practice of sleep medicine 4 (1), 16–26.

Carter, B. T., and Luke, S. G. (2020). Best practices in eye tracking research. Int. J. Psychophysiol. 155, 49–62. doi:10.1016/j.ijpsycho.2020.05.010

Chau, S. A., Herrmann, N., Sherman, C., Chung, J., Eizenman, M., Kiss, A., et al. (2017). Visual selective attention toward novel stimuli predicts cognitive decline in alzheimer's disease patients. J. Alzheimers Dis. 55 (4), 1339–1349. doi:10.3233/jad-160641

Chi, Y. M., Wang, Y. T., Wang, Y., Maier, C., Jung, T. P., and Cauwenberghs, G. (2012). Dry and noncontact EEG sensors for mobile brain-computer interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 20 (2), 228–235. doi:10.1109/tnsre.2011.2174652

Chisholm, D., Sweeny, K., Sheehan, P., Rasmussen, B., Smit, F., Cuijpers, P., et al. (2016). Scaling-up treatment of depression and anxiety: A global return on investment analysis. Lancet Psychiatry 3 (5), 415–424. doi:10.1016/s2215-0366(16)30024-4

Climent, G., Rodríguez, C., García, T., Areces, D., Mejías, M., Aierbe, A., et al. (2021). New virtual reality tool (Nesplora aquarium) for assessing attention and working memory in adults: A normative study. Appl. Neuropsychol. Adult 28 (4), 403–415. doi:10.1080/23279095.2019.1646745

Coughlan, G., Laczó, J., Hort, J., Minihane, A., and Hornberger, M. (2018b). Spatial navigation deficits–the overlooked cognitive fingerprint for incipient Alzheimer pathophysiology. Nat. Rev. Neurol. 14 (8), 496–506. doi:10.1038/s41582-018-0031-x

Coughlan, G., Laczó, J., Hort, J., Minihane, A-M., and Hornberger, M. (2018a). Spatial navigation deficits—Overlooked cognitive marker for preclinical alzheimer disease? Nat. Rev. Neurol. 14 (8), 496–506. doi:10.1038/s41582-018-0031-x

Creel, D. J. (2019). The electrooculogram. Handb. Clin. Neurol. 160, 495–499. doi:10.1016/B978-0-444-64032-1.00033-3

Dauwels, J., Vialatte, F., and Cichocki, A. J. C. A. R. (2010). Diagnosis of alzheimer's disease from EEG signals: Where are we standing? Curr. Alzheimer Res. 7 (6), 1–19. doi:10.2174/1567210204558652050

Dechsling, A., Orm, S., Kalandadze, T., Sütterlin, S., Øien, R. A., Shic, F., et al. (2021). Virtual and augmented reality in social skills interventions for individuals with autism spectrum disorder: A scoping review. J. autism Dev. Disord. 52, 4692–4707. doi:10.1007/s10803-021-05338-5

Dellazizzo, L., Potvin, S., Phraxayavong, K., and Dumais, A. (2021). One-year randomized trial comparing virtual reality-assisted therapy to cognitive–behavioral therapy for patients with treatment-resistant schizophrenia. npj Schizophr. 7 (1), 9. doi:10.1038/s41537-021-00139-2

Ding, X., Yue, X., Zheng, R., Bi, C., Li, D., and Yao, G. (2019). Classifying major depression patients and healthy controls using EEG, eye tracking and galvanic skin response data. J. Affect Disord. 251, 156–161. doi:10.1016/j.jad.2019.03.058

Duan, F., Huang, Z., Sun, Z., Zhang, Y., Zhao, Q., Cichocki, A., et al. (2020). Topological network analysis of early Alzheimer’s disease based on resting-state EEG. IEEE Trans. Neural Syst. Rehabilitation Eng. 28 (10), 2164–2172. doi:10.1109/tnsre.2020.3014951

Dyck, M., Winbeck, M., Leiberg, S., Chen, Y., and Mathiak, K. (2010). Virtual faces as a tool to study emotion recognition deficits in schizophrenia. Psychiatry Res. 179 (3), 247–252. doi:10.1016/j.psychres.2009.11.004

American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders. 5th Edn.

Ferrari, G. R., Mobius, M., van Opdorp, A., Becker, E. S., and Rinck, M. (2016). Can't look away: An eye-tracking based attentional disengagement training for depression. Cogn. Ther. Res. 40 (5), 672–686. doi:10.1007/s10608-016-9766-0

Ford, J. M., Palzes, V. A., Roach, B. J., and Mathalon, D. H. (2014). Did I do that? Abnormal predictive processes in schizophrenia when button pressing to deliver a tone. Schizophr. Bull. 40 (4), 804–812. doi:10.1093/schbul/sbt072

Freeman, D. (2008). Studying and treating schizophrenia using virtual reality: A new paradigm. Schizophr. Bull. 34 (4), 605–610. doi:10.1093/schbul/sbn020

Freeman, M. (2022). World mental health report: Transforming mental health for all. World Psychiatry 21 (3), 391–392. doi:10.1002/wps.21018

García-Betances, R. I., Arredondo Waldmeyer, M. T., Fico, G., and Cabrera-Umpiérrez, M. F. (2015). A succinct overview of virtual reality technology use in Alzheimer’s disease. Front. aging Neurosci. 7, 80. doi:10.3389/fnagi.2015.00080

Gauthier, S., Reisberg, B., Zaudig, M., Petersen, R. C., Ritchie, K., Broich, K., et al. (2006). Mild cognitive impairment. Mild Cogn. Impair. 367 (9518), 1262–1270. doi:10.1016/s0140-6736(06)68542-5

Goharinejad, S., Goharinejad, S., Hajesmaeel-Gohari, S., and Bahaadinbeigy, K. (2022). The usefulness of virtual, augmented, and mixed reality technologies in the diagnosis and treatment of attention deficit hyperactivity disorder in children: An overview of relevant studies. BMC psychiatry 22 (1), 4–13. doi:10.1186/s12888-021-03632-1

Gotardi, G. C., Rodrigues, S. T., Barbieri, F. A., Brito, M. B., Bonfim, J. V. A., and Polastri, P. F. (2020). Wearing a head-mounted eye tracker may reduce body sway. Neurosci. Lett. 722, 134799. doi:10.1016/j.neulet.2020.134799

Gould, N. F., Holmes, M. K., Fantie, B. D., Luckenbaugh, D. A., Pine, D. S., Gould, T. D., et al. (2007). Performance on a virtual reality spatial memory navigation task in depressed patients. Am. J. Psychiatry 164 (3), 516–519. doi:10.1176/ajp.2007.164.3.516

Grewe, P., Lahr, D., Kohsik, A., Dyck, E., Markowitsch, H. J., Bien, C., et al. (2014). Real-life memory and spatial navigation in patients with focal epilepsy: Ecological validity of a virtual reality supermarket task. Epilepsy & Behav. 31, 57–66. doi:10.1016/j.yebeh.2013.11.014

Guo, S., Wu, K., Li, C., Wang, H., Sun, Z., Xi, D., et al. (2021). Integrated contact lens sensor system based on multifunctional ultrathin MoS2 transistors. Matter 4 (3), 969–985. doi:10.1016/j.matt.2020.12.002

Guo, S., Yang, D., Zhang, S., Dong, Q., Li, B., Tran, N., et al. (2019). Development of a cloud-based epidermal MoSe2 device for hazardous gas sensing. Adv. Funct. Mater. 29 (18), 1900138. doi:10.1002/adfm.201900138

Herrero, J. F., and Lorenzo, G. (2020). An immersive virtual reality educational intervention on people with autism spectrum disorders (ASD) for the development of communication skills and problem solving. Educ. Inf. Technol. 25, 1689–1722. doi:10.1007/s10639-019-10050-0

Hidalgo-Mazzei, D., Young, A. H., Vieta, E., and Colom, F. (2018). Behavioural biomarkers and mobile mental health: A new paradigm. Int. J. bipolar Disord. 6 (1), 9–4. doi:10.1186/s40345-018-0119-7

Holzman, P. S., Proctor, L. R., Levy, D. L., Yasillo, N. J., Meltzer, H. Y., and Hurt, S. W. (1974). Eye-tracking dysfunctions in schizophrenic patients and their relatives. Arch. Gen. Psychiatry 31 (2), 143–151. doi:10.1001/archpsyc.1974.01760140005001

Howett, D., Castegnaro, A., Krzywicka, K., Hagman, J., Marchment, D., Henson, R., et al. (2019). Differentiation of mild cognitive impairment using an entorhinal cortex-based test of virtual reality navigation. Brain 142 (6), 1751–1766. doi:10.1093/brain/awz116

Huang, Y., Xu, J., Liu, J., Wang, X., and Chen, B. (2017). Disease-related detection with electrochemical biosensors: A review. Sensors (Basel). 17 (10), 2375. doi:10.3390/s17102375

Hutton, S. (2019). Eye tracking methodology. Eye movement research. Springer, 277–308.

Hyman, S., Chisholm, D., Kessler, R., Patel, V., and Whiteford, H. (2006). Mental disorders. Disease control priorities related to mental, neurological, developmental and substance abuse disorders, 1–20.

Ibrahim, S., Djemal, R., Alsuwailem, A. J. B., and Engineering, B. (2018). Electroencephalography (EEG) signal processing for epilepsy and autism spectrum disorder diagnosis. Biocybern. Biomed. Eng. 38 (1), 16–26. doi:10.1016/j.bbe.2017.08.006

Jerald, J. (2015). The VR book: Human-centered design for virtual reality. Morgan & Claypool.

Jiang, J., Yan, Z., Shen, T., Xu, G., Guan, Q., and Yu, Z. (2019b). Use of deep Belief network model to discriminate mild cognitive impairment and normal controls based on EEG, eye movement signals and neuropsychological tests. J. Med. Imaging Health Inf. 9 (9), 1978–1985. doi:10.1166/jmihi.2019.2825

Jiang, J., Yan, Z., Sheng, C., Wang, M., Guan, Q., Yu, Z., et al. (2019a). A novel detection tool for mild cognitive impairment patients based on eye movement and electroencephalogram. J. Alzheimers Dis. 72 (2), 389–399. doi:10.3233/jad-190628

Jiang, J., Zhai, G., and Jiang, Z., editors. Modeling the self-navigation behavior of patients with Alzheimer’s disease in virtual reality. VR/AR and 3D displays: First international conference, ICVRD 2020, Hangzhou, China, 2020, Revised selected papers 1; 2021: Springer.

Kang, J., Han, X., Song, J., Niu, Z., and Li, X. (2020). The identification of children with autism spectrum disorder by SVM approach on EEG and eye-tracking data. Comput. Biol. Med. 120, 103722. doi:10.1016/j.compbiomed.2020.103722

Kapp, S., Barz, M., Mukhametov, S., Sonntag, D., and Kuhn, J. (2021). Arett: Augmented reality eye tracking toolkit for head mounted displays. Sensors (Basel). 21 (6), 2234. doi:10.3390/s21062234

Kathmann, N., Hochrein, A., Uwer, R., and BjajoP, B. (2003). Deficits in gain of smooth pursuit eye movements in schizophrenia and affective disorder patients and their unaffected relatives. Am. J. Psychiatry 160 (4), 696–702. doi:10.1176/appi.ajp.160.4.696

Kaur, A. (2021). Wheelchair control for disabled patients using EMG/EOG based human machine interface: A review. J. Med. Eng. Technol. 45 (1), 61–74. doi:10.1080/03091902.2020.1853838

Kennedy, A. (2016). Book review: Eye tracking: A comprehensive guide to methods and measures. Q. J. Exp. Psychol. 69 (3), 607–609. doi:10.1080/17470218.2015.1098709

Khare, S. K., and Bajaj, V. (2020). An evolutionary optimized variational mode decomposition for emotion recognition. IEEE Sensors J. 21 (2), 2035–2042. doi:10.1109/jsen.2020.3020915

Khare, S. K., Bajaj, V., and Sinha, G. R. (2020). Adaptive tunable Q wavelet transform-based emotion identification. IEEE Trans. Instrum. Meas. 69 (12), 9609–9617. doi:10.1109/tim.2020.3006611

Kim, J-H., Thang, N. D., and Kim, T-S. (Editors) (2009). 3-D hand motion tracking and gesture recognition using a data glove (IEEE International Symposium on Industrial Electronics; 2009: IEEE).

Kim, K., Geiger, P., Herr, N., and Rosenthal, M. (Editors) (2010). The virtual reality emotion sensitivity test (V-REST): Development and construct validity (San Francisco, CA: Association for Behavioral and Cognitive Therapies (ABCT) conference). (November 18–21, 2010).

Kim, K., Rosenthal, M. Z., Gwaltney, M., Jarrold, W., Hatt, N., McIntyre, N., et al. (2015). A virtual joy-stick study of emotional responses and social motivation in children with autism spectrum disorder. J. autism Dev. Disord. 45, 3891–3899. doi:10.1007/s10803-014-2036-7

Klawohn, J., Bruchnak, A., Burani, K., Meyer, A., Lazarov, A., Bar-Haim, Y., et al. (2020). Aberrant attentional bias to sad faces in depression and the role of stressful life events: Evidence from an eye-tracking paradigm. Behav. Res. Ther. 135, 103762. doi:10.1016/j.brat.2020.103762

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521 (7553), 436–444. doi:10.1038/nature14539

Lencer, R., Malchow, C. P., Trillenberg-Krecker, K., Schwinger, E., and Arolt, V. J. B. P. (2000). Eye-tracking dysfunction (ETD) in families with sporadic and familial schizophrenia. Biol. Psychiatry 47 (5), 391–401. doi:10.1016/s0006-3223(99)00249-8

Levinoff, E. J., Li, K. Z., Murtha, S., and Chertkow, H. J. N. (2004). Selective attention impairments in alzheimer's disease: Evidence for dissociable components. Neuropsychology 18 (3), 580–588. doi:10.1037/0894-4105.18.3.580

Li, W., Yang, Y., Liu, Z-H., Zhao, Y-J., Zhang, Q., Zhang, L., et al. (2020). Progression of mental health services during the COVID-19 outbreak in China. Int. J. Biol. Sci. 16 (10), 1732–1738. doi:10.7150/ijbs.45120

Liu, C., Zhang, B., Chen, W., Liu, W., and Zhang, S. (2021). Current development of wearable sensors based on nanosheets and applications. TrAC Trends Anal. Chem. 143, 116334. doi:10.1016/j.trac.2021.116334

Liu, N. H., Chiang, C. Y., and Chu, H. C. (2013). Recognizing the degree of human attention using EEG signals from mobile sensors. Sensors (Basel) 13 (8), 10273–10286. doi:10.3390/s130810273

Lorenzo, G., Gómez-Puerta, M., Arráez-Vera, G., and Lorenzo-Lledó, A. (2019). Preliminary study of augmented reality as an instrument for improvement of social skills in children with autism spectrum disorder. Educ. Inf. Technol. 24, 181–204. doi:10.1007/s10639-018-9768-5

Lu, S., Xu, J., Li, M., Xue, J., Lu, X., Feng, L., et al. (2017). Attentional bias scores in patients with depression and effects of age: A controlled, eye-tracking study. J. Int. Med. Res. 45 (5), 1518–1527. doi:10.1177/0300060517708920

Majaranta, P., and Bulling, A. (2014). Eye tracking and eye-based human–computer interaction. Advances in Physiological Computing. Human–Computer Interaction Series. 39–65.

March, J. S., Parker, J. D., Sullivan, K., Stallings, P., and Conners, C. K. (1997). The multidimensional anxiety scale for children (MASC): Factor structure, reliability, and validity. J. Am. Acad. child Adolesc. psychiatry 36 (4), 554–565. doi:10.1097/00004583-199704000-00019

Matsue, Y., Osakabe, K., Saito, H., Goto, Y., Ueno, T., Matsuoka, H., et al. (1994). Smooth pursuit eye movements and express saccades in schizophrenic patients. Schizophr. Res. 12 (2), 121–130. doi:10.1016/0920-9964(94)90069-8

Mattson, M. P. J. N. (2004). Pathways towards and away from Alzheimer's disease. Nature 430 (7000), 631–639. doi:10.1038/nature02621

Mayeux, R. (2004). Biomarkers: Potential uses and limitations. NeuroRx 1, 182–188. doi:10.1602/neurorx.1.2.182

Meek, P. D., McKeithan, E. K., GtjptjoHP, S., and Therapy, D. (1998). Economic considerations in alzheimer's disease. Econ. considerations Alzheimer's Dis. 18 (2P2), 68–73. doi:10.1002/j.1875-9114.1998.tb03880.x

Mehrfard, A., Fotouhi, J., Taylor, G., Forster, T., Navab, N., and Fuerst, B. (2019). A comparative analysis of virtual reality head-mounted display systems. arXiv preprint arXiv:191202913.

Möricke, E., Buitelaar, J. K., NnjjoA, R., and Disorders, D. (2016). Do we need multiple informants when assessing autistic traits? The degree of report bias on offspring, self, and spouse ratings. J. Autism Dev. Disord. 46 (1), 164–175. doi:10.1007/s10803-015-2562-y

Nourbakhsh, N., Wang, Y., Chen, F., and Calvo, R. A. (Editors) (2012). “Using galvanic skin response for cognitive load measurement in arithmetic and reading tasks,” Proceedings of the 24th australian computer-human interaction conference.

Organization, W. H. (2017). Depression and other common mental disorders: Global health estimates. World Health Organization.

Paula, C. A. R., Reategui, C., Costa, B. K. S., da Fonseca, C. Q., da Silva, L., Morya, E., et al. (2017). High-frequency EEG variations in children with autism spectrum disorder during human faces visualization. Biomed. Res. Int. 2017, 1–11. doi:10.1155/2017/3591914

Pereira, A., Reisgo, R. S., and Wanger, M. B. (2007). Autismo infantil: Tradução e validação da CARS (Childhood Autism Rating Scale) para uso no Brasil. Artig. Orig. 84. doi:10.1590/S0021-75572008000700004

Perry, R. J., and Hodges, J. R. J. B. (1999). Attention and executive deficits in alzheimer's disease: A critical review. Brain 122 (3), 383–404. doi:10.1093/brain/122.3.383

Plancher, G., Tirard, A., Gyselinck, V., Nicolas, S., and Piolino, P. (2012). Using virtual reality to characterize episodic memory profiles in amnestic mild cognitive impairment and alzheimer's disease: Influence of active and passive encoding. Neuropsychologia 50 (5), 592–602. doi:10.1016/j.neuropsychologia.2011.12.013

Plitt, M., Barnes, K. A., and Martin, A. J. N. C. (2015). Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards. Neuroimage Clin. 7, 359–366. doi:10.1016/j.nicl.2014.12.013

Psychiatry CSo (2020). Expert consensus on managing pathway and coping strategies for patients with mental disorders during prevention and control of infectious disease outbreak. In press.

Puthusseryppady, V., Emrich-Mills, L., Lowry, E., Patel, M., and Hornberger, M. (2020). Spatial disorientation in alzheimer's disease: The missing path from virtual reality to real world. Front. Aging Neurosci. 12, 550514. doi:10.3389/fnagi.2020.550514

Puthusseryppady, V., Morrissey, S., Spiers, H., Patel, M., and Hornberger, M. (2022). Predicting real world spatial disorientation in Alzheimer’s disease patients using virtual reality navigation tests. Sci. Rep. 12 (1), 13397. doi:10.1038/s41598-022-17634-w

Racine, N., Cooke, J. E., Eirich, R., Korczak, D. J., McArthur, B., and Madigan, S. (2020). Child and adolescent mental illness during COVID-19: A rapid review. Psychiatry Res. 292, 113307. doi:10.1016/j.psychres.2020.113307

Rahman, M. M., Bhuiyan, M. I. H., and Hassan, A. R. (2018). Sleep stage classification using single-channel EOG. Comput. Biol. Med. 102, 211–220. doi:10.1016/j.compbiomed.2018.08.022

Rangel-Gomez, M., and Mjjop, M. (2016). Neurotransmitters and novelty: A systematic review. Neurotransmitters Nov. a Syst. Rev. 30 (1), 3–12. doi:10.1177/0269881115612238

Rayner, K. (2009). The 35th Sir Frederick Bartlett Lecture: Eye movements and attention in reading, scene perception, and visual search. Q. J. Exp. Psychol. (Hove) 62 (8), 1457–1506. doi:10.1080/17470210902816461

Rechtschaffen, A. (1968). A manual for standardized terminology, techniques and scoring system for sleep stages in human subjects.

Riva, G., Malighetti, C., Chirico, A., Di Lernia, D., Mantovani, F., and Dakanalis, A. (2020). "Virtual reality," in Rehabilitation interventions in the patient with obesity, 189–204.

Rohrbach, N., Gulde, P., Armstrong, A. R., Hartig, L., Abdelrazeq, A., Schröder, S., et al. (2019). An augmented reality approach for ADL support in Alzheimer’s disease: A crossover trial. J. neuroengineering rehabilitation 16, 66–11. doi:10.1186/s12984-019-0530-z

Römhild, J., Fleischer, S., Meyer, G., Stephan, A., Zwakhalen, S., Leino-Kilpi, H., et al. (2018). Inter-rater agreement of the quality of life-alzheimer’s disease (QoL-AD) self-rating and proxy rating scale: Secondary analysis of RightTimePlaceCare data. Health Qual. Life Outcomes 16 (1), 131–213. doi:10.1186/s12955-018-0959-y

Sachadev, J. S., Bhatnagar, R. J. M. I., and Intelligence, B. U. A. (2022). A comprehensive review on brain disease mapping—the underlying technologies and AI based techniques for feature extraction and classification using EEG signals, 73–91.

Sadiq, M. T., Akbari, H., Siuly, S., Yousaf, A., and Rehman, A. U. (2021). A novel computer-aided diagnosis framework for EEG-based identification of neural diseases. Comput. Biol. Med. 138, 104922. doi:10.1016/j.compbiomed.2021.104922

Sadiq, M. T., Yu, X., Yuan, Z., and Aziz, M. Z. J. S. (2020). Identification of motor and mental imagery EEG in two and multiclass subject-dependent tasks using successive decomposition index. Sensors (Basel). 20 (18), 5283. doi:10.3390/s20185283

Sanchez, A., Vazquez, C., Marker, C., LeMoult, J., and Joormann, J. (2013). Attentional disengagement predicts stress recovery in depression: An eye-tracking study. J. Abnorm Psychol. 122 (2), 303–313. doi:10.1037/a0031529

Scarone, S., Gambini, O., Häfele, E., Bellodi, L., and Ejbp, S. (1987). Neurofunctional assessment of schizophrenia: A preliminary investigation of the presence of eye-tracking (spems) and quality extinction test (QET) abno. Biol. Psychol. 24 (3), 253–259. doi:10.1016/0301-0511(87)90006-8

Scarone, S., Gambini, O., Pieri Ejpf-H, J. G., and Laterality, E. A. (1983). Dominant hemisphere dysfunction in chronic schizophrenia: Schwartz test and short aphasia screening test.

Scarone, S., Pieri, E., Gambini, O., Massironi, R., and CljtbjoP, C. (1982). The asymmetric lateralization of tactile extinction in schizophrenia: The possible role of limbic and frontal regions. Br. J. Psychiatry 141 (4), 350–353. doi:10.1192/bjp.141.4.350

Schmidt, H. D., Shelton, R. C., and Duman, R. S. (2011). Functional biomarkers of depression: Diagnosis, treatment, and pathophysiology. Neuropsychopharmacology 36 (12), 2375–2394. doi:10.1038/npp.2011.151

Serino, S., Morganti, F., Di Stefano, F., and Riva, G. (2015). Detecting early egocentric and allocentric impairments deficits in Alzheimer’s disease: An experimental study with virtual reality. Front. aging Neurosci. 7, 88. doi:10.3389/fnagi.2015.00088

Sharma, M., Achuth, P., Deb, D., Puthankattil, S. D., and Acharya, U. R. J. C. S. R. (2018). An automated diagnosis of depression using three-channel bandwidth-duration localized wavelet filter bank with EEG signals. Cogn. Syst. Res. 52, 508–520. doi:10.1016/j.cogsys.2018.07.010

Sharma, M., Darji, J., Thakrar, M., and Acharya, U. R. (2022). Automated identification of sleep disorders using wavelet-based features extracted from electrooculogram and electromyogram signals. Comput. Biol. Med. 143, 105224. doi:10.1016/j.compbiomed.2022.105224

Siever, L. J., RdjjoN, C., and Disease, M. (1985). Biological markers for schizophrenia and the biological high-risk approach.

Silber, M. H., Ancoli-Israel, S., Bonnet, M. H., Chokroverty, S., Grigg-Damberger, M. M., Hirshkowitz, M., et al. (2007). The visual scoring of sleep in adults. J. Clin. Sleep. Med. 3 (02), 121–131. doi:10.5664/jcsm.26814

Soufineyestani, M., Dowling, D., and Khan, A. (2020). Electroencephalography (EEG) technology applications and available devices. Appl. Sci. 10 (21), 7453. doi:10.3390/app10217453

Sullivan, T. J., Deiss, S. R., and Cauwenberghs, G. (2007). “A low-noise, non-contact EEG/ECG sensor,” in IEEE Biomedical Circuits and Systems Conference (IEEE).

Sunderland, T., Hampel, H., Takeda, M., Putnam, K. T., and Cohen, R. M. (2006). Biomarkers in the diagnosis of Alzheimer’s disease: Are we ready? J. Geriatr. Psychiatry Neurol. 19 (3), 172–179. doi:10.1177/0891988706291088

Tarnanas, I., Tsolaki, M., Nef, T., Müri, R. M., and Mosimann, U. P. (2014). Can a novel computerized cognitive screening test provide additional information for early detection of Alzheimer's disease? Alzheimer's Dementia 10 (6), 790–798. doi:10.1016/j.jalz.2014.01.002

Tasnim, T., Das, A., Pathan, N. S., and Hossain, Q. D. (2019). “Sleep states classification based on single channel electrooculogram signal using variational mode decomposition,” in 2019 IEEE International Conference on Telecommunications and Photonics (ICTP) (IEEE).

Tawhid, M., Siuly, S., and Wang, H. (2020). Diagnosis of autism spectrum disorder from EEG using a time–frequency spectrogram image-based approach. Electron. Lett. 56 (25), 1372–1375. doi:10.1049/el.2020.2646

Tieri, G., Morone, G., Paolucci, S., and Iosa, M. (2018). Virtual reality in cognitive and motor rehabilitation: Facts, fiction and fallacies. Expert Rev. Med. devices 15 (2), 107–117. doi:10.1080/17434440.2018.1425613

Tschanz, J., Welsh-Bohmer, K., Lyketsos, C., Corcoran, C., Green, R. C., Hayden, K., et al. (2006). Conversion to dementia from mild cognitive disorder: The Cache County Study. Neurology 67 (2), 229–234. doi:10.1212/01.wnl.0000224748.48011.84

Velasco-Garcia, M. N., and Mottram, T. (2003). Biosensor technology addressing agricultural problems. Biosyst. Eng. 84 (1), 1–12. doi:10.1016/s1537-5110(02)00236-2

Vigneshvar, S., Sudhakumari, C. C., Senthilkumaran, B., and Prakash, H. (2016). Recent advances in biosensor technology for potential applications - an overview. Front. Bioeng. Biotechnol. 4, 11. doi:10.3389/fbioe.2016.00011

Voinescu, A., Petrini, K., Stanton Fraser, D., Lazarovicz, R-A., Papavă, I., Fodor, L. A., et al. (2023). The effectiveness of a virtual reality attention task to predict depression and anxiety in comparison with current clinical measures. Virtual Real. 27 (1), 119–140. doi:10.1007/s10055-021-00520-7

Wang, C., Liu, C., Shang, F., Niu, S., Ke, L., Zhang, N., et al. (2022). Tactile sensing technology in bionic skin: A review. Biosens. Bioelectron. 220, 114882. doi:10.1016/j.bios.2022.114882

Wang, J., Barstein, J., Ethridge, L. E., Mosconi, M. W., Takarae, Y., and Sweeney, J. (2013). Resting state EEG abnormalities in autism spectrum disorders. J. Neurodev. Disord. 5 (1), 24–14. doi:10.1186/1866-1955-5-24

Wechsler, D. (1945). Wechsler Memory Scale.

Weiner, M. W. (2009). Imaging and biomarkers will be used for detection and monitoring progression of early Alzheimer’s disease. J. Nutr. Health Aging 13 (4), 332–333. doi:10.1007/s12603-009-0032-y

Wong, M-L., and Licinio, J. (2001). Research and treatment approaches to depression. Nat. Rev. Neurosci. 2 (5), 343–351. doi:10.1038/35072566

Yamagata, B., Itahashi, T., Fujino, J., Ohta, H., Nakamura, M., Kato, N., et al. (2019). Machine learning approach to identify a resting-state functional connectivity pattern serving as an endophenotype of autism spectrum disorder. Brain Imaging Behav. 13 (6), 1689–1698. doi:10.1007/s11682-018-9973-2

Yamaguchi, T., Foloppe, D. A., Richard, P., Richard, E., and Allain, P. (2012). A dual-modal virtual reality kitchen for (re)learning of everyday cooking activities in Alzheimer's disease. Presence 21 (1), 43–57. doi:10.1162/pres_a_00080

Yaneva, V., Ha, L. A., Eraslan, S., Yesilada, Y., and Mitkov, R. (2018). “Detecting autism based on eye-tracking data from web searching tasks,” in Proceedings of the 15th International Web for All Conference, 1–10.

Yang, D., Wang, H., Luo, S., Wang, C., Zhang, S., and Guo, S. (2019). Paper-cut flexible multifunctional electronics using MoS2 nanosheet. Nanomater. (Basel) 9 (7), 922. doi:10.3390/nano9070922

Yen, H-Y., and Chiu, H-L. (2021). Virtual reality exergames for improving older adults’ cognition and depression: A systematic review and meta-analysis of randomized control trials. J. Am. Med. Dir. Assoc. 22 (5), 995–1002. doi:10.1016/j.jamda.2021.03.009

Yeung, A. W. K., Tosevska, A., Klager, E., Eibensteiner, F., Laxar, D., Stoyanov, J., et al. (2021). Virtual and augmented reality applications in medicine: Analysis of the scientific literature. J. Med. internet Res. 23 (2), e25499. doi:10.2196/25499

Yildirim, O., Baloglu, U. B., and Acharya, U. R. (2019). A deep learning model for automated sleep stages classification using PSG signals. Int. J. Environ. Res. Public Health 16 (4), 599. doi:10.3390/ijerph16040599

Zhang, L. (2019). “EEG signals classification using machine learning for the identification and diagnosis of schizophrenia,” in 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (IEEE).

Zhang, S., Liu, C., Sun, X., and Huang, W. (2022). Current development of materials science and engineering towards epidermal sensors. Prog. Mater. Sci. 128, 100962. doi:10.1016/j.pmatsci.2022.100962

Zhang, S., Xia, Q., Ma, S., Yang, W., Wang, Q., Yang, C., et al. (2021b). Current advances and challenges in nanosheet-based wearable power supply devices. iScience 24 (12), 103477. doi:10.1016/j.isci.2021.103477

Zhang, S., Zeng, J., Wang, C., Feng, L., Song, Z., Zhao, W., et al. (2021a). The application of wearable glucose sensors in point-of-care testing. Front. Bioeng. Biotechnol. 9, 774210. doi:10.3389/fbioe.2021.774210

Zhang, Y., Xu, H., Zhao, Y., Zhang, L., and Zhang, Y. (2021c). Application of the P300 potential in cognitive impairment assessments after transient ischemic attack or minor stroke. Neurol. Res. 43 (4), 336–341. doi:10.1080/01616412.2020.1866245

Zhao, Z., Tang, H., Zhang, X., Qu, X., Hu, X., and Lu, J. (2021). Classification of children with autism and typical development using eye-tracking data from face-to-face conversations: Machine learning model development and performance evaluation. J. Med. Internet Res. 23 (8), e29328. doi:10.2196/29328

Zhu, G., Wang, J., Xiao, L., Yang, K., Huang, K., Li, B., et al. (2021). Memory deficit in patients with temporal lobe epilepsy: Evidence from eye tracking technology. Front. Neurosci. 15, 716476. doi:10.3389/fnins.2021.716476

Zhu, J., Wang, Z., Zeng, S., Li, X., Hu, B., Zhang, X., et al. (2019). “Toward depression recognition using EEG and eye tracking: An ensemble classification model CBEM,” in IEEE International Conference on Bioinformatics and Biomedicine (BIBM) (IEEE).

Keywords: biosensors, mental diseases, eye-tracking, EEG signals, EOG signals, virtual reality, diagnosis method

Citation: Zheng Y, Liu C, Lai NYG, Wang Q, Xia Q, Sun X and Zhang S (2023) Current development of biosensing technologies towards diagnosis of mental diseases. Front. Bioeng. Biotechnol. 11:1190211. doi: 10.3389/fbioe.2023.1190211

Received: 20 March 2023; Accepted: 16 June 2023;
Published: 29 June 2023.

Edited by:

Daniele Tosi, Nazarbayev University, Kazakhstan

Reviewed by:

William Serrano, University of South Florida, United States
Si Chen, Jiangsu University, China

Copyright © 2023 Zheng, Liu, Lai, Wang, Xia, Sun and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xu Sun, Xu.Sun@nottingham.edu.cn; Sheng Zhang, szhang1984@zju.edu.cn

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.