<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2023.1172103</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>EEG-based analysis for pilots&#x2019; at-risk cognitive competency identification using RF-CNN algorithm</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes"><name>
<surname>Jiang</surname>
<given-names>Shaoqi</given-names>
</name><xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1168745/overview"/>
</contrib>
<contrib contrib-type="author"><name>
<surname>Chen</surname>
<given-names>Weijiong</given-names>
</name><xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<xref rid="aff3" ref-type="aff"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author"><name>
<surname>Ren</surname>
<given-names>Zhenzhen</given-names>
</name><xref rid="aff3" ref-type="aff"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author"><name>
<surname>Zhu</surname>
<given-names>He</given-names>
</name><xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>College of Information Engineering, Jinhua Polytechnic</institution>, <addr-line>Jinhua, Zhejiang</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>College of Environment and Engineering, Shanghai Maritime University</institution>, <addr-line>Shanghai</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>College of Merchant Marine, Shanghai Maritime University</institution>, <addr-line>Shanghai</addr-line>, <country>China</country></aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by">
<p>Edited by: Xiongkuo Min, Shanghai Jiao Tong University, China</p>
</fn>
<fn id="fn0002" fn-type="edited-by">
<p>Reviewed by: Grace Mary Kanaga E, Karunya Institute of Technology and Sciences, India; Harikumar R, Bannari Amman Institute of Technology (BIT), India; Yucheng Zhu, Shanghai Jiao Tong University, China</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Shaoqi Jiang, <email>sqjiang95@163.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>21</day>
<month>04</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>17</volume>
<elocation-id>1172103</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>02</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>04</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2023 Jiang, Chen, Ren and Zhu.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Jiang, Chen, Ren and Zhu</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Cognitive competency is an essential complement to the existing ship pilot screening system and deserves focused attention. Situation awareness (SA), as the cognitive foundation of unsafe behaviors, strongly influences piloting performance. To address this issue, this paper develops an identification model based on the random forest-convolutional neural network (RF-CNN) method for detecting at-risk cognitive competency (i.e., a low SA level) using wearable EEG signal acquisition technology. In the poor visibility scene, the pilots&#x2019; SA levels were correlated with EEG frequency metrics in the frontal (F) and central (C) regions, including &#x03B1;/&#x03B2; (<italic>p</italic>&#x2009;=&#x2009;0.071&#x2009;&#x003C;&#x2009;0.1 in F and <italic>p</italic>&#x2009;=&#x2009;0.042&#x2009;&#x003C;&#x2009;0.05 in C), &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;) (<italic>p</italic>&#x2009;=&#x2009;0.048&#x2009;&#x003C;&#x2009;0.05 in F and <italic>p</italic>&#x2009;=&#x2009;0.026&#x2009;&#x003C;&#x2009;0.05 in C) and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2; (<italic>p</italic>&#x2009;=&#x2009;0.046&#x2009;&#x003C;&#x2009;0.05 in F and <italic>p</italic>&#x2009;=&#x2009;0.012&#x2009;&#x003C;&#x2009;0.05 in C); a total of 12 correlation features were then obtained based on a 5&#x2009;s sliding time window. The RF algorithm, combined with principal component analysis (PCA), was used for further feature combination, and the salient feature combinations served as input sets for a CNN whose parameters were optimized for identification. Comparing the proposed RF-CNN (accuracy of 84.8%) against the individual RF (accuracy of 78.1%) and CNN (accuracy of 81.6%) methods demonstrates that the RF-CNN with feature optimization provides the best identification of at-risk cognitive competency (an accuracy increase of 6.7%). Overall, the results of this paper provide key technical support for the development of an adaptive, intelligence-based evaluation system of pilots&#x2019; cognitive competency, and lay the foundation and framework for monitoring the cognitive process and competency of ship piloting operations in China.</p>
</abstract>
<kwd-group>
<kwd>ship pilotage</kwd>
<kwd>cognitive competency</kwd>
<kwd>situation awareness</kwd>
<kwd>correlation evaluation</kwd>
<kwd>feature identification</kwd>
</kwd-group>
<counts>
<fig-count count="9"/>
<table-count count="6"/>
<equation-count count="21"/>
<ref-count count="51"/>
<page-count count="15"/>
<word-count count="10277"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Frontiers in Neuroscience</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>1. Introduction</title>
<p>Improvements to ship pilots&#x2019; situation awareness (SA) in maritime navigation are critical to reducing human errors, which have caused 75&#x2013;96% of marine accidents over the last few years (<xref ref-type="bibr" rid="ref15">Hetherington et al., 2006</xref>; <xref ref-type="bibr" rid="ref30">Mohammadfam et al., 2019</xref>). In recent years, growth in traffic densities, ship speeds, and ship sizes has increased the need to improve pilots&#x2019; operational safety (<xref ref-type="bibr" rid="ref45">Weng and Yang, 2015</xref>). Due to the complexity of marine systems and the increased use of automation and intelligence, it is becoming more difficult for pilots to fully perceive and understand the current environmental state and to predict changes in the near future (<xref ref-type="bibr" rid="ref46">Wild, 2011</xref>; <xref ref-type="bibr" rid="ref43">Stirling et al., 2019</xref>). To prevent pilots&#x2019; unsafe behaviors in a more effective and practical manner, it is indispensable to identify at-risk cognitive competency in combination with emergency situations in pilotage (<xref ref-type="bibr" rid="ref31">Mosier et al., 2013</xref>). However, the measurement gap is widened by the requirement that pilots, as the task executors, maintain high SA levels, as well as by individual differences and the experiential nature of pilotage (<xref ref-type="bibr" rid="ref7">Darbra et al., 2007</xref>). Therefore, identifying at-risk cognitive competency (i.e., low SA levels) as a means of preventing human errors is complicated, and is worthy of further investigation in operational situations.</p>
<p>At present, collisions in poor visibility continue to account for a major proportion of all accidents resulting in casualties, and studies have focused on determining their relevance through statistical methods (<xref ref-type="bibr" rid="ref3">Antao and Soares, 2019</xref>). <xref ref-type="bibr" rid="ref6">Chauvin et al. (2013)</xref> verified that collision accidents were significantly affected by visibility, which is a variable in 56.51% of environmental factors in the human factors analysis and classification system (HFACS). <xref ref-type="bibr" rid="ref5">Bye and Aalberg (2018)</xref> supported this finding by calculating the ability of different visibility conditions to explain accidents, and concluded that the most significant effect occurs when visibility drops below 0.25 nautical miles. Existing research has effectively evaluated whether poor visibility conditions are more likely to cause accidents than other environmental factors (e.g., water depth, flow rate, etc.), but has not assessed changes in the cognitive state of pilots preparing for potential hazards in poor visibility (<xref ref-type="bibr" rid="ref10">Endsley, 2015</xref>). In this study, poor visibility was selected as a typical situation in which the identification of pilots&#x2019; low SA levels is necessary. According to the literature, the SA three-level hierarchical structure is one of the most popular conceptual cognitive frameworks, framing SA into three stages: perception, comprehension, and prediction (<xref ref-type="bibr" rid="ref26">Lo et al., 2016</xref>). Therefore, the first problem to solve is how to accurately identify pilots&#x2019; low SA levels within the three-level cognitive framework, taking poor visibility scenes as an example (<xref ref-type="bibr" rid="ref50">Zhang et al., 2020</xref>).</p>
<p>According to the theoretical interpretation of the SA three-level model, among physiological indicators, eye movements and EEG parameters most effectively reflect an individual&#x2019;s intrinsic state across the cognitive processes of perception, understanding, and prediction, making it possible to prevent unsafe behaviors at their source by detecting SA levels dynamically (<xref ref-type="bibr" rid="ref50">Zhang et al., 2020</xref>; <xref ref-type="bibr" rid="ref35">Pei et al., 2022</xref>). The correlation between pilots&#x2019; eye-tracking metrics and SA levels has been validated in a previous study (<xref ref-type="bibr" rid="ref19">Jiang et al., 2021</xref>). However, eye-tracking metrics are acquired only from tangible visual behaviors, and there are no consistent conclusions on whether such behaviors truly satisfy the perceptual requirements of the SA model, or whether the information gazed at is effectively understood by the pilot and the corresponding action taken (<xref ref-type="bibr" rid="ref8">di Flumeri et al., 2018</xref>). Therefore, research based on EEG acquisition technology is needed to further quantify the cognitive processes underlying unsafe behaviors, and the use of pilots&#x2019; EEG features for SA identification is the first problem to be addressed.</p>
<p>To ensure consistency in the study of EEG signals across application domains, the brain is usually divided into electrode sites over five regions [frontal (F), central (C), parietal (P), temporal (T), and occipital (O)] (<xref ref-type="bibr" rid="ref2">Aggarwal and Chugh, 2022</xref>). An EEG activity map of the brain can be displayed by converting the EEG data into the time-frequency domain with appropriate methods such as the Fourier transform (FT) (<xref ref-type="bibr" rid="ref42">Srinivasan et al., 2019</xref>). In time-frequency analysis, information about the cognitive state of an individual is usually obtained from the data in different frequency bands, commonly including delta (&#x03B4;) (1&#x2013;4&#x2009;Hz), theta (&#x03B8;) (5&#x2013;8&#x2009;Hz), alpha (&#x03B1;) (9&#x2013;14&#x2009;Hz), beta (&#x03B2;) (15&#x2013;30&#x2009;Hz), and gamma (&#x03B3;) (31&#x2013;59&#x2009;Hz) (<xref ref-type="bibr" rid="ref33">Pan et al., 2021</xref>). It has therefore become a common trend to analyze frequency features for performance evaluation in EEG-based studies (<xref ref-type="bibr" rid="ref13">Guti&#x00E9;rrez and Ram&#x00ED;rez-Moreno, 2016</xref>). <xref ref-type="bibr" rid="ref37">Puma et al. (2018)</xref> explored the effect of task difficulty on the &#x03B8; and &#x03B1; bands by varying the number of sub-tasks and showed that the higher the level of the operator&#x2019;s behavioral performance, the lower the power spectral density (PSD) of the &#x03B8; and &#x03B1; bands. <xref ref-type="bibr" rid="ref9">Dimitriadis et al. (2010)</xref> found that the &#x03B4; band was correlated with the difficulty of the operator&#x2019;s perceptual task, i.e., the PSD of the &#x03B4; band increased when the operator perceived the experimental task as too easy. However, previous studies have not fully utilized EEG time-frequency features for cognitive research.</p>
<p>No consensus has been reached on which EEG features correlate with SA, as EEG acquisition techniques have been applied across different tasks and workplaces. Regarding the relationship between time-frequency features and SA, there have been attempts to validate the association (<xref ref-type="bibr" rid="ref23">Klaproth et al., 2020</xref>; <xref ref-type="bibr" rid="ref18">Iqbal et al., 2021</xref>). <xref ref-type="bibr" rid="ref21">K&#x00E4;stle et al. (2021)</xref> identified correlations between the &#x03B2; and &#x03B3; bands in the parietal and temporal regions and SA levels during a control task. Notably, only a few studies (<xref ref-type="bibr" rid="ref41">Saini et al., 2020</xref>) have adopted multiple EEG spatial-frequency features, but there is no agreement on which features are associated with SA owing to different task conditions and application purposes, let alone on identifying pilots&#x2019; SA levels from EEG association metrics.</p>
<p>Therefore, the aims of this paper are twofold. The first is to assess the relationships between EEG time-frequency features and SA levels; correlation analysis indicated that the &#x03B1;/&#x03B2;, &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;), and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2; frequency metrics are significantly correlated with SA levels. The second is to explore an SA identification method based on the associated EEG metrics, which is expected to reduce piloting risks through improved pilot selection and training.</p>
</sec>
<sec id="sec2">
<title>2. Experiments and methods</title>
<p>The research framework was constructed to identify pilots&#x2019; SA levels using EEG features after the EEG acquisition experiment had been implemented in the bridge simulator (<xref rid="fig1" ref-type="fig">Figure 1</xref>). Firstly, it was assumed that EEG features and SA were significantly correlated in specific scenes (including the poor visibility situation). Consequently, based on the SART questionnaire and EEG acquisition technology, the SA level groups and the EEG metrics were defined as the independent and dependent variables, respectively. To ensure the professionalism of the questionnaires and the measurement accuracy, the existing SART measurement items were reviewed and adjusted by experts in safety engineering and management, maritime supervision, and senior pilotage (<xref ref-type="bibr" rid="ref19">Jiang et al., 2021</xref>).</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Identification framework of SA levels in the maritime pilotage simulations.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g001.tif"/>
</fig>
<p>Secondly, EEG acquisition technology was used to capture pilots&#x2019; brain waves, including the &#x03B8;/&#x03B2;, &#x03B1;/&#x03B2;, &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;), (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2;, and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/(&#x03B1;&#x2009;+&#x2009;&#x03B2;) metrics in different brain regions (e.g., F and C), which were used to indirectly infer SA-related constructs (e.g., understanding and prediction). Visualization methods (i.e., brain-region activity and time-frequency analysis) were then applied to gain a preliminary understanding of the common EEG patterns and the differences between the SA groups. Although visualization techniques allow EEG data to be analyzed in an explorative way, a statistical analysis must be performed to determine the differences in the spatial-frequency analysis caused by variations in SA levels. Quantitatively, EEG frequency metrics were calculated for pilots in each SA group (high and low) within the different regions. The average power-based values of the EEG metrics of pilots with high and low SA levels were input into a permutation simulation to compare their associations. After digital and time-domain filtering, the FT-based EEG frequency features were divided into training and testing sets for input to the random forest-convolutional neural network (RF-CNN) method, which provides a new avenue for classifying SA in pilots&#x2019; screening.</p>
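The frequency metrics listed above are ratios of band powers. A minimal sketch of how such ratios could be computed from a PSD estimate, using the band limits defined in the Introduction; the function names and the sum-based band-power approximation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Frequency bands as defined in this study (Hz)
BANDS = {"theta": (5, 8), "alpha": (9, 14), "beta": (15, 30)}

def band_power(freqs, psd, band):
    """Approximate band power by summing PSD bins inside the band."""
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return float(np.sum(psd[mask]))

def frequency_metrics(freqs, psd):
    """Compute the ratio metrics used for SA analysis in this paper."""
    t = band_power(freqs, psd, BANDS["theta"])
    a = band_power(freqs, psd, BANDS["alpha"])
    b = band_power(freqs, psd, BANDS["beta"])
    return {
        "theta/beta": t / b,
        "alpha/beta": a / b,
        "theta/(alpha+theta)": t / (a + t),
        "(alpha+theta)/beta": (a + t) / b,
        "(alpha+theta)/(alpha+beta)": (a + t) / (a + b),
    }
```

Each metric is a scalar per window and region, so the 12 correlation features of the abstract would correspond to such ratios evaluated over sliding windows in the F and C regions.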
<sec id="sec3">
<title>2.1. Experiments</title>
<sec id="sec4">
<title>2.1.1. Participants</title>
<p>Twenty-five male pilots with normal vision from different pilotage stations were recruited as experimental participants while taking the qualification examination for pilots&#x2019; competency in navigation simulators. All pilots were aged from 30 to 45 years, with an average of 11.3&#x2009;years of pilotage experience. All pilots provided informed consent to the experimental procedures, which were approved by the relevant authority.</p>
</sec>
<sec id="sec5">
<title>2.1.2. Situations</title>
<p>The trajectory from Shanghai Waigaoqiao terminal phase 5 to the west Hengsha anchorage was selected from the database of qualification examinations, as shown in <xref rid="fig2" ref-type="fig">Figure 2A</xref>. The course and distance of each section of the route from the port to the anchorage are marked according to the industry standard, and scenes such as ship departure (initial course 18.4 degrees, distance 0.37&#x2009;nm) and navigation in the fairway (initial course 121.3 degrees, distance 2.40&#x2009;nm) are included. Poor visibility was included as a mandatory assessment element in the pilotage experiments, as shown in <xref rid="fig2" ref-type="fig">Figure 2B</xref>. In addition, the initial conditions were set uniformly: a 5,000 TEU container ship, an initial speed of 0 knots, a flood tide of 1 to 2 knots, and a north wind of force 3.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Voyage plan <bold>(A)</bold> and ship pilotage with poor visibility situations in the ship pilotage simulations <bold>(B)</bold>.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g002.tif"/>
</fig>
</sec>
<sec id="sec6">
<title>2.1.3. Procedure</title>
<p>In this experiment, 25 exams were scheduled over 5&#x2009;days, each testing one participant who acted as the pilot in a three-person exam group. Using the SART questionnaire and EEG acquisition, the experiment was divided into two parts, the piloting examination and SA measurement. The procedure was as follows:<list list-type="order">
<list-item>
<p>In the examination section, the EEG acquisition device (Semi-dry Bitbrain EEG) was calibrated before the experiment. The pilots then used the bridge simulator to sail along the preset navigation route and pass through the situations, conducting at least 40&#x2009;min of ship pilotage tasks. During this time, a wearable wireless EEG device collected the pilots&#x2019; brain waves.</p>
</list-item>
<list-item>
<p>A demographic survey was used to pre-judge the pilots&#x2019; potential SA level before the experiment. Interviews based on the SART questionnaire were conducted after the test to confirm the SA levels.</p>
</list-item>
</list></p>
</sec>
</sec>
<sec id="sec7">
<title>2.2. Methods</title>
<sec id="sec8">
<title>2.2.1. Data analysis</title>
<p>The extraction of EEG features involved identifying and calculating raw-data metrics that depend on the different SA levels. EEG records neuronal firing in different brain regions through electrodes arranged at corresponding locations over the cerebral cortex, creating a dynamic data curve over time. It reflects brain activity during a specified time-period and can be classified as spontaneous or evoked EEG according to the principle of signal generation (<xref ref-type="bibr" rid="ref22">Kaur et al., 2022</xref>; <xref ref-type="bibr" rid="ref38">Quan et al., 2022</xref>). To investigate the cognitive processes of the pilots in a continuous task, rather than the cognitive state at a certain moment, spontaneous rhythmic brain waves were studied. First, the distribution of the initial EEG data in the temporal and spatial dimensions (including the F, C, P, and O regions) of the pilots in different SA groups was collected, and a 50-s period was taken for preliminary analysis, as shown in <xref rid="fig3" ref-type="fig">Figure 3</xref>. The voltage value displayed in each region is the average of the corresponding EEG potentials within that region. The distribution of the data revealed that the EEG data of the pilots in the high SA group fluctuated more gently, indicating that they had better control of the situations during the piloting processes. The variability between SA groups was relatively obvious in the F region, suggesting that EEG features in this region may help to distinguish the pilots&#x2019; high or low cognitive level. <xref rid="fig3" ref-type="fig">Figure 3</xref> shows the EEG data over a 50-s period, but in practice the device records at a sampling frequency of 256&#x2009;Hz, and the total number of samples collected is approximately 1,847,820. In addition, EEG recordings are routinely contaminated by activity from non-neural sources, such as ocular artifacts, cardiac artifacts, myoelectric artifacts, and power-line interference (<xref ref-type="bibr" rid="ref48">Yang et al., 2020</xref>); voltage values above 200&#x2009;&#x03BC;V are considered signal artifacts, as in <xref rid="fig3" ref-type="fig">Figure 3</xref>.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Fragment of the EEG raw data in different SA groups.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g003.tif"/>
</fig>
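A minimal sketch of how the raw recordings could be segmented and screened, assuming the 256&#x2009;Hz sampling rate, the 5&#x2009;s window used for feature extraction, and the 200&#x2009;&#x03BC;V artifact threshold reported in this paper; the function and constant names are hypothetical, not the authors' code:

```python
import numpy as np

FS = 256          # sampling rate (Hz), as reported in this study
WIN_S = 5         # sliding-window length (s), per the abstract
AMP_MAX = 200.0   # amplitude threshold (uV) above which a window is treated as artifact

def epoch_and_reject(signal, fs=FS, win_s=WIN_S, amp_max=AMP_MAX):
    """Cut a 1-D EEG channel into non-overlapping 5 s windows and
    drop any window whose peak amplitude exceeds the artifact threshold."""
    win = fs * win_s
    n = len(signal) // win
    epochs = signal[: n * win].reshape(n, win)   # (n_windows, samples_per_window)
    keep = np.abs(epochs).max(axis=1) <= amp_max
    return epochs[keep]
```

The surviving windows would then be passed to the frequency-domain feature extraction described next.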
<p>Subsequently, EEG preprocessing is performed through feature extraction to reduce data complexity and remove artifacts. Owing to the random, nonlinear, and multi-band characteristics of EEG signals, digital filters are commonly used to filter signals in the frequency-domain, but in practice they can only remove the 50&#x2009;Hz power-line interference. Therefore, indirect processing through effective feature extraction methods is required, including common spatial patterns (CSP), wavelet transform (WT), principal component analysis (PCA), independent component analysis (ICA), autoregressive (AR) analysis, and the FT. To reduce the data dimensions for effective time-frequency analysis, the FT-based PSD was selected for data processing and feature extraction; power-based EEG data analysis has previously been applied in safety-critical areas (<xref ref-type="bibr" rid="ref44">Wang et al., 2015</xref>; <xref ref-type="bibr" rid="ref36">Perez-Valero et al., 2021</xref>). Specifically, the EEG signals are first transformed into the frequency-domain by the FT, and the PSD features are then calculated as shown in the following equations:</p>
<disp-formula id="EQ1">
<label>(1)</label>
<mml:math id="M1">
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>w</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:munderover>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>exp</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>j</mml:mi>
<mml:mn>2</mml:mn>
<mml:mi>&#x03C0;</mml:mi>
<mml:mi>w</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mspace width="thickmathspace"/>
<mml:mi>w</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M2">
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>w</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the EEG signal after the Fourier transform and <italic>N</italic> is the total number of samples acquired during the specified time-period.</p>
<disp-formula id="EQ2">
<label>(2)</label>
<mml:math id="M3">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mfenced>
<mml:mi>w</mml:mi>
</mml:mfenced>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mfenced>
<mml:mrow>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mfenced close="|" open="|">
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mfenced>
<mml:mi>w</mml:mi>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M4">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>w</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the feature variable of PSD, and <inline-formula>
<mml:math id="M5">
<mml:mrow>
<mml:mi>&#x0394;</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M6">
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mi>s</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denote the sampling time interval and the acquisition frequency of the samples, respectively.</p>
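A small numerical sketch of Equations (1) and (2), computing the FT-based PSD with NumPy's FFT; the function name is illustrative, and this is one plausible reading of the periodogram defined above rather than the authors' exact implementation:

```python
import numpy as np

def psd_ft(x, fs):
    """FT-based PSD following Eqs. (1)-(2):
    X(w) = sum_t x(t) exp(-j 2 pi w t / N),
    S(w) = (dt)^2 / (N dt) * |X(w)|^2, with dt = 1/fs."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    dt = 1.0 / fs
    X = np.fft.fft(x)                          # discrete Fourier transform, Eq. (1)
    S = (dt ** 2) / (N * dt) * np.abs(X) ** 2  # periodogram, Eq. (2)
    freqs = np.fft.fftfreq(N, d=dt)
    return freqs[: N // 2], S[: N // 2]        # keep the positive-frequency half
```

For a pure 10&#x2009;Hz sinusoid sampled at 256&#x2009;Hz, the spectral peak of S(w) falls at 10&#x2009;Hz, and band powers for the &#x03B8;, &#x03B1;, and &#x03B2; ranges can be read directly off the returned spectrum.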
</sec>
<sec id="sec9">
<title>2.2.2. RF-CNN methods</title>
<p>In the simulation experiments described above, real-time, synchronous EEG data were first collected. The multi-dimensionality of EEG signals and their susceptibility to non-cerebral activity, which introduces a large number of artifacts, make direct analysis of the data difficult (<xref ref-type="bibr" rid="ref39">Rabcan et al., 2022</xref>). To classify pilots&#x2019; SA level using wearable EEG acquisition technology, an RF-CNN method comprising data input, RF, modified CNN, and verification modules was developed. RF, as an ensemble learning method, carries out voting integration over the predictions of decision tree classifiers, and offers good accuracy and strong robustness in the presence of noise and outliers (<xref ref-type="bibr" rid="ref28">McDonald et al., 2020</xref>). The CNN is a typical deep learning algorithm that can mathematically represent complex, multi-dimensional classification problems and achieve accurate, fast identification of targets thanks to good network generalization ability (<xref ref-type="bibr" rid="ref51">Zhu et al., 2021</xref>; <xref ref-type="bibr" rid="ref11">Fan et al., 2022</xref>).</p>
<p>Currently, CNN algorithms have been applied directly to EEG images, but the identification accuracy is only about 50%. For example, <xref ref-type="bibr" rid="ref20">Jie et al. (2014)</xref> used a CNN alone for image identification of EEG signals and obtained an accuracy of 45%. On this basis, some researchers first extracted features from the preprocessed EEG signal and then combined them with a CNN to identify EEG features, achieving better results. For example, <xref ref-type="bibr" rid="ref24">Li M. A. et al. (2021)</xref> extracted WT features of EEG signals for CNN training, and the final recognition accuracy exceeded 85%. Overall, the use of EEG signal features for RF and CNN model training has achieved good results in safety-related fields and has become an important research trend (<xref ref-type="bibr" rid="ref17">Ieracitano et al., 2019</xref>; <xref ref-type="bibr" rid="ref1">Admiraal et al., 2021</xref>). Because EEG data involve multiple, clearly correlated feature dimensions, commonly used machine learning algorithms fail to effectively reduce signal redundancy and relatively lack the ability to identify the internal mechanisms of EEG features. In this study, the RF-PCA algorithm is therefore used to rank the importance of the initial EEG features and extract the main components, optimizing the inputs to the CNN network structure; reducing the feature dimensions of the initial EEG data further ensures the overall identification rate of the RF-CNN algorithm, and the performance of the optimized method in SA level identification is verified by comparison with traditional methods.</p>
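The RF-PCA feature optimization described above might be sketched as follows with scikit-learn, assuming the 12 correlation features form the columns of X; the function name, the number of retained features, and the component count are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def rf_pca_features(X, y, n_keep=6, n_components=3, seed=0):
    """Rank features with an RF, keep the top-n by importance,
    then compress them with PCA before feeding a downstream classifier."""
    rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                random_state=seed).fit(X, y)
    # importance ranking, highest first
    top = np.argsort(rf.feature_importances_)[::-1][:n_keep]
    # principal components of the selected features
    pca = PCA(n_components=n_components).fit(X[:, top])
    return pca.transform(X[:, top]), top, rf.oob_score_
```

The returned component matrix would then serve as the optimized input set for the CNN module, while the out-of-bag score gives a quick internal check on the RF ranking.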
<sec id="sec10">
<title>2.2.2.1. RF module</title>
<p>As a machine learning method, RFs are effective tools for evaluation and classification (<xref ref-type="bibr" rid="ref4">Scornet et al., 2015</xref>). The bootstrap sampling technique was used to extract training subsets from the samples that had been preprocessed by digital and time-domain filtering. Decision tree modeling was carried out for each subset (S<sub>1</sub>, S<sub>2</sub>, &#x22EF;, S<sub>k</sub>), and then the classification results were determined by voting according to the principle of majority rule. The objective of the RF was to generate a decision tree dependent on a random variable &#x03B8; on the basis of data sample X and identification variable Y. Assuming that the identification result of a single decision tree classifier <inline-formula>
<mml:math id="M7">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:msub>
<mml:mi>&#x03B8;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is<inline-formula>
<mml:math id="M8">
<mml:mrow>
<mml:mspace width="thickmathspace"/>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>X</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, the final identification result of the model can be expressed as<inline-formula>
<mml:math id="M9">
<mml:mrow>
<mml:mspace width="thickmathspace"/>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>X</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>k</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:munderover>
<mml:msub>
<mml:mi>h</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>X</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
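The ensemble rule H(X) = (1/k) &#x2211; h_i(X) can be illustrated with a minimal sketch in which each "tree" is a toy decision stump; this is not the study's implementation, and all names (predict_forest, trees) are illustrative:

```python
# Minimal sketch of the RF ensemble rule H(X) = (1/k) * sum_i h_i(X).
import numpy as np

def predict_forest(trees, X):
    """Average the k individual tree outputs; for binary labels in {0, 1},
    thresholding the average at 0.5 reproduces the majority-voting rule."""
    votes = np.array([h(X) for h in trees])  # shape (k, n_samples)
    avg = votes.mean(axis=0)                 # (1/k) * sum of h_i(X)
    return (avg >= 0.5).astype(int)          # majority rule

# Toy "trees": stumps that threshold the first feature at different cut points.
trees = [lambda X, t=t: (X[:, 0] > t).astype(int) for t in (0.2, 0.4, 0.6)]
X = np.array([[0.1], [0.5], [0.9]])
labels = predict_forest(trees, X)
```

With real data, each stump would be replaced by a tree trained on its own bootstrap subset S_k.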
<p>The feature importance of the RF was determined by adding noise to a certain feature and considering whether the identification accuracy dropped significantly (<xref ref-type="bibr" rid="ref12">Genuer et al., 2010</xref>). In the calculation process, the residual mean square of the out-of-bag (OOB) score was used to evaluate the importance of characteristic variables. This can be expressed as:</p>
<disp-formula id="EQ3">
<label>(3)</label>
<mml:math id="M10">
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>p</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>e</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>O</mml:mi>
<mml:mi>O</mml:mi>
<mml:mi>B</mml:mi>
<mml:mn>2</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>O</mml:mi>
<mml:mi>O</mml:mi>
<mml:mi>B</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>errOOB1</italic> and <italic>errOOB2</italic> are the OOB identification errors in each decision tree before and after adding noise interference to all sample features, respectively. To determine the optimal parameters, the grid search method was used to ensure model stability. The root mean square error (RMSE) was adopted to prevent the overfitting of feature data. This is calculated as follows:</p>
<disp-formula id="EQ4">
<label>(4)</label>
<mml:math id="M11">
<mml:mrow>
<mml:mi>R</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>=</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x00AF;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M12">
<mml:mrow>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>b</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>and <inline-formula>
<mml:math id="M13">
<mml:mrow>
<mml:msup>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>&#x00AF;</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> are the observed and predicted values of the corresponding samples, respectively, and <italic>n</italic> is the number of samples. Moreover, because the correlation of initial features might tend to produce partial overlaps of information, PCA was applied to obtain the optimal feature combination:</p>
<disp-formula id="EQ5">
<label>(5)</label>
<mml:math id="M14">
<mml:mrow>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mn>12</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mo>&#x22EF;</mml:mo>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mn>21</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mn>22</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mo>&#x22EF;</mml:mo>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mo>&#x22EE;</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mo>&#x22EF;</mml:mo>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mi>z</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M15">
<mml:mrow>
<mml:msubsup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mn>2</mml:mn>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:mo>&#x22EF;</mml:mo>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula><italic>, i&#x2009;=&#x2009;1,2,</italic>&#x22EF;<italic>, z</italic>. No two principal components are related, i.e., <inline-formula>
<mml:math id="M16">
<mml:mrow>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2260;</mml:mo>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mspace width="thickmathspace"/>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2260;</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x22EF;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>z</mml:mi>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. The first principal component F<sub>1</sub> is the linear combination of the initial importance sequence (X<sub>1</sub>, X<sub>2</sub>, &#x22EF;, X<sub>z</sub>) that captures the largest variance. The second principal component F<sub>2</sub> is the linear combination of X<sub>1</sub>-X<sub>z</sub> that is uncorrelated with F<sub>1</sub> and captures the next-largest variance. The remaining principal components were obtained in the same way and used as the new input set for the CNN.</p>
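The importance ranking and PCA projection described in this module can be sketched as follows; this is a minimal illustration on synthetic data, not the study's pipeline, and all names (permutation_importance, pca_components, score) are assumptions:

```python
# Hedged sketch of the RF-PCA feature-optimization step: rank features by a
# permutation (noise) importance score, then project onto principal components
# F_i = e_i1*X_1 + ... + e_iz*X_z with unit-norm loading vectors.
import numpy as np

rng = np.random.default_rng(0)

def permutation_importance(score_fn, X, y, n_features):
    """Importance of feature f = error after shuffling f minus baseline error,
    mirroring the (errOOB2 - errOOB1) difference averaged in Eq. (3)."""
    base_err = 1.0 - score_fn(X, y)
    imps = []
    for f in range(n_features):
        Xp = X.copy()
        Xp[:, f] = rng.permutation(Xp[:, f])  # inject noise into feature f
        imps.append((1.0 - score_fn(Xp, y)) - base_err)
    return np.array(imps)

def pca_components(X, n_components):
    """Columns are the loading vectors e_i of Eq. (5), each with unit norm."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]  # descending variance
    return vecs[:, order]

X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)  # toy label that depends on feature 0 only
score = lambda Xi, yi: float(((Xi[:, 0] > 0).astype(int) == yi).mean())
imps = permutation_importance(score, X, y, 5)  # feature 0 should dominate
E = pca_components(X, 2)                       # reduced input set for the CNN
```

In the actual method the score function would be the RF's OOB accuracy rather than this toy predictor.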
</sec>
<sec id="sec11">
<title>2.2.2.2. CNN modified module</title>
<p>As a deep learning method, CNNs are often used to deal with binary classification problems involving multi-dimensional samples (<xref ref-type="bibr" rid="ref14">Hersche et al., 2020</xref>). Essentially, the CNN algorithm uses convolution, pooling, and fully connected networks to alternately extract, reduce the dimensionality of, and fuse features from multidimensional EEG data to achieve SA-level identification. Feature fusion in the CNN addresses the filtering and extraction of EEG data within the identification framework through five structures: an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer (<xref ref-type="bibr" rid="ref25">Li F. H. et al., 2021</xref>; <xref ref-type="bibr" rid="ref27">Luo et al., 2021</xref>). The CNN model with the PCA algorithm was used to transfer the importance rankings and improve convergence efficiency.</p>
<sec id="sec12">
<title>2.2.2.2.1. Structure 1</title>
<p>Input layer: The input feature size is preset to 13&#x2009;&#x00D7;&#x2009;20.</p>
</sec>
<sec id="sec13">
<title>2.2.2.2.2. Structure 2</title>
<p>Convolutional layer: The number of layers depends on the training parameters and convergence speed. Among the parameters, the convolution kernel, i.e., the matrix of filter weights, is usually a 3&#x2009;&#x00D7;&#x2009;3 or 5&#x2009;&#x00D7;&#x2009;5 matrix, and the layer output is calculated as follows:</p>
<disp-formula id="EQ6">
<label>(6)</label>
<mml:math id="M17">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:munder>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mi>x</mml:mi>
</mml:munder>
<mml:munder>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mi>y</mml:mi>
</mml:munder>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x00D7;</mml:mo>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M18">
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the filter weights, <italic>b</italic> is the filter bias term, and <italic>f</italic> is the activation function.</p>
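Equation (6) can be sketched as a plain "valid" 2-D convolution; the function name and the identity test kernel below are illustrative, not the study's code:

```python
# Minimal sketch of Eq. (6): g = f(sum_x sum_y a_{x,y} * w_{x,y} + b),
# slid over the input as a "valid" 2-D convolution with one kernel.
import numpy as np

def conv2d_valid(a, w, b, f=lambda v: np.maximum(v, 0.0)):
    """Apply one kernel w (e.g., 3x3 or 5x5) with bias b and activation f."""
    H, W = a.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = a[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * w) + b  # weighted sum plus bias
    return f(out)

a = np.arange(16, dtype=float).reshape(4, 4)
w = np.zeros((3, 3)); w[1, 1] = 1.0            # identity kernel picks the center
g = conv2d_valid(a, w, b=0.0)
```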
</sec>
<sec id="sec14">
<title>2.2.2.2.3. Structure 3</title>
<p>Pooling layer: After each convolution process obtains different feature information, layer-by-layer filtering of the convolved features is achieved through pooling. The principle closely follows the forward learning process of the convolutional layer, i.e., the unit matrix corresponding to each feature matrix is computed and normalized as the filter moves from the top-left to the bottom-right corner of the current network (<xref ref-type="bibr" rid="ref16">Hu et al., 2019</xref>). The pooling layer optimizes the parameters of the convolution processes, and the activation function of the pooling processes, the nonlinear ReLU, also helps to reduce the interdependence between the convolution parameters through sparse network matrices. It is calculated as follows:</p>
<disp-formula id="EQ7">
<label>(7)</label>
<mml:math id="M19">
<mml:mrow>
<mml:msup>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mi>max</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mi>x</mml:mi>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mi>x</mml:mi>
<mml:mo>&#x003E;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mi>x</mml:mi>
<mml:mo>&#x2264;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M20">
<mml:mrow>
<mml:msup>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is the output of the activation function and <inline-formula>
<mml:math id="M21">
<mml:mrow>
<mml:msup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is the multiplier of the weights and corresponding values of the neurons in the <inline-formula>
<mml:math id="M22">
<mml:mi>j</mml:mi>
</mml:math>
</inline-formula><italic>th</italic> layer. The maximum pooling process was selected for model training, with the following equation:</p>
<disp-formula id="EQ8">
<label>(8)</label>
<mml:math id="M23">
<mml:mrow>
<mml:mi>g</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>max</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>a</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
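Equations (7) and (8) can be sketched together as an elementwise ReLU followed by non-overlapping 2&#x2009;&#x00D7;&#x2009;2 maximum pooling; the helper names are illustrative:

```python
# Sketch of Eq. (7), the ReLU h = max(w^T x, 0) applied elementwise, and
# Eq. (8), maximum pooling g = max(a_{x,y}) over non-overlapping windows.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)  # Eq. (7), elementwise

def max_pool2d(a, size=2):
    """Non-overlapping max pooling; assumes dimensions divisible by `size`."""
    H, W = a.shape
    a = a.reshape(H // size, size, W // size, size)
    return a.max(axis=(1, 3))  # Eq. (8): max over each size x size window

a = np.array([[1., 2., 0., 1.],
              [3., 4., 2., 2.],
              [0., 1., 5., 0.],
              [1., 0., 0., 6.]])
pooled = max_pool2d(relu(a))
```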
</sec>
<sec id="sec112">
<title>2.2.2.2.4. Structure 4</title>
<p>Fully connected layer: Each node is connected to all nodes in the previous layer, integrating all the features extracted during the convolution and pooling processes. Softmax is the final output function; it transforms the raw output of the neural network into a probability distribution and yields the recognition probability of each category. Suppose the raw output of the neural network is <inline-formula>
<mml:math id="M24">
<mml:mrow>
<mml:mspace width="thickmathspace"/>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, then the Softmax regression process can be expressed as:</p>
<disp-formula id="EQ9">
<label>(9)</label>
<mml:math id="M25">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>f</mml:mi>
<mml:mi>t</mml:mi>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:msubsup>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>To measure the speed of convergence, the cross-entropy method is used to describe the distance between two probability distributions of the output. Given the probability distributions <italic>p</italic> and <italic>q</italic>, their cross-entropy can be expressed as follows:</p>
<disp-formula id="EQ10">
<label>(10)</label>
<mml:math id="M26">
<mml:mrow>
<mml:mi>H</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mi>q</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mi>x</mml:mi>
</mml:munder>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>log</mml:mi>
<mml:mi>q</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
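The softmax regression and the cross-entropy distance above can be sketched numerically, using the standard form y&#x2032;<sub>i</sub> = e<sup>y_i</sup>/&#x2211;<sub>j</sub>e<sup>y_j</sup>; the variable names are illustrative:

```python
# Sketch of the softmax regression and of the cross-entropy
# H(p, q) = -sum_x p(x) log q(x) used to assess the output distribution.
import numpy as np

def softmax(y):
    e = np.exp(y - np.max(y))  # shift by the max for numerical stability
    return e / e.sum()

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))

y = np.array([2.0, 1.0, 0.1])       # raw network outputs y_1..y_n
q = softmax(y)                      # predicted probability distribution
p = np.array([1.0, 0.0, 0.0])       # one-hot target distribution
loss = cross_entropy(p, q)          # distance between p and q
```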
<p>To balance the training time and accuracy of the network, a gradient descent method with error back-propagation is also used, i.e., each training step draws a fixed number of samples from the training set and updates the parameters only in the direction opposite to the gradient. Assuming <inline-formula>
<mml:math id="M27">
<mml:mrow>
<mml:mi>J</mml:mi>
<mml:mspace width="thickmathspace"/>
</mml:mrow>
</mml:math>
</inline-formula>is cost function, the iterative process for each <inline-formula>
<mml:math id="M28">
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is:</p>
<disp-formula id="EQ12">
<label>(11)</label>
<mml:math id="M29">
<mml:mrow>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B5;</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>+</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msubsup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>&#x03B5;</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <inline-formula>
<mml:math id="M31">
<mml:mi>&#x03B5;</mml:mi>
</mml:math>
</inline-formula> is the learning rate, <inline-formula>
<mml:math id="M32">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M33">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:mi>J</mml:mi>
<mml:mo>/</mml:mo>
<mml:msubsup>
<mml:mi>b</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mi>l</mml:mi>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> are the partial derivatives of the error with respect to the weights and biases, respectively.</p>
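The update rule in Equation (11) can be sketched on a toy quadratic cost with known gradients; the cost J(w, b) = (w - 3)&#x00B2; + (b + 1)&#x00B2; and all names are illustrative, not the network's actual cost function:

```python
# Minimal sketch of the gradient-descent update of Eq. (11):
# w <- w - eps * dJ/dw,  b <- b - eps * dJ/db.

def gd_step(w, b, grad_w, grad_b, eps):
    """One parameter update in the direction opposite to the gradient."""
    return w - eps * grad_w, b - eps * grad_b

w, b, eps = 0.0, 0.0, 0.1  # initial weights/bias and learning rate
for _ in range(100):
    # Gradients of J(w, b) = (w - 3)^2 + (b + 1)^2, known in closed form.
    w, b = gd_step(w, b, 2.0 * (w - 3.0), 2.0 * (b + 1.0), eps)
```

After 100 steps the parameters converge to the minimizer (w, b) = (3, -1).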
<p>In addition, based on the CNN principle, each convolution process generates different information, and the convolutional layers exhibit more spatial and frequency features than the fully connected layer. During the feature filtering of convolution and pooling, some important features that affect the results are inevitably filtered out to meet the requirement of convergence efficiency, which degrades identification performance. Therefore, the PCA method is introduced to improve the CNN structure: the dimensionality of the features is reduced after each convolution process, and the reduced data are fused point by point with the pooling output before being input to the following convolutional layer, until the end of training. This modified CNN module not only improves convergence efficiency but also integrates important information through the correlation mechanism, so that the EEG time-frequency features are reflected in the identification results as fully as possible.</p>
</sec>
</sec>
<sec id="sec15">
<title>2.2.2.3. Verification module</title>
<p>For the test set containing samples of unknown categories, the confusion matrix of the model output included true positive (TP), false positive (FP), true negative (TN), and false negative (FN) results. To evaluate the identification performance of the RF-CNN method, RF and CNN methods without optimized feature combinations were selected for comparative analysis. The performance evaluation criteria for each classifier were the general accuracy (ACC), true positive rate (TPR), and true negative rate (TNR). These metrics were calculated as follows:</p>
<disp-formula id="EQ13">
<label>(12)</label>
<mml:math id="M34">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>C</mml:mi>
<mml:mi>C</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ14">
<label>(13)</label>
<mml:math id="M35">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mi>R</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ15">
<label>(14)</label>
<mml:math id="M36">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>R</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
<mml:mo>+</mml:mo>
<mml:mi>F</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x00D7;</mml:mo>
<mml:mn>100</mml:mn>
<mml:mi>%</mml:mi>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where TP refers to samples with an observed value of 1 and a predicted value of 1, FP refers to samples with an observed value of 0 and a predicted value of 1, TN refers to samples with an observed value of 0 and a predicted value of 0, and FN refers to samples with an observed value of 1 and a predicted value of 0.</p>
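Equations (12)-(14) can be sketched under the standard confusion-matrix conventions (TP: observed 1 and predicted 1; FP: observed 0 and predicted 1; TN: observed 0 and predicted 0; FN: observed 1 and predicted 0); the labels below are made up for illustration:

```python
# Sketch of the evaluation metrics ACC, TPR, and TNR from Eqs. (12)-(14).

def confusion_metrics(obs, pred):
    """Count the four confusion-matrix cells and return percentages."""
    tp = sum(1 for o, p in zip(obs, pred) if o == 1 and p == 1)
    fp = sum(1 for o, p in zip(obs, pred) if o == 0 and p == 1)
    tn = sum(1 for o, p in zip(obs, pred) if o == 0 and p == 0)
    fn = sum(1 for o, p in zip(obs, pred) if o == 1 and p == 0)
    acc = (tp + tn) / (tp + tn + fp + fn) * 100  # Eq. (12)
    tpr = tp / (tp + fn) * 100                   # Eq. (13)
    tnr = tn / (tn + fp) * 100                   # Eq. (14)
    return acc, tpr, tnr

obs  = [1, 1, 0, 0, 1, 0]   # illustrative observed labels
pred = [1, 0, 0, 1, 1, 0]   # illustrative predicted labels
acc, tpr, tnr = confusion_metrics(obs, pred)
```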
</sec>
</sec>
</sec>
</sec>
<sec id="sec16" sec-type="results">
<title>3. Results</title>
<p>For the purpose of constructing the SA identification model using EEG frequency metrics in different brain regions, groups with different SA levels were first established. To facilitate the analysis, the pilots&#x2019; SA was divided into two levels based on their SART scores: high (above the average SART score) and low (below the average SART score). According to the SART scores (mean&#x2009;=&#x2009;20.13, standard deviation&#x2009;=&#x2009;5.83), the pilots were divided into a high-SA group of 13 participants (mean&#x2009;=&#x2009;24.5, standard deviation&#x2009;=&#x2009;5.13) and a low-SA group of 12 participants (mean&#x2009;=&#x2009;15.2, standard deviation&#x2009;=&#x2009;4.37).</p>
<sec id="sec17">
<title>3.1. Time-frequency analysis</title>
<p>To visually compare the EEG patterns of individuals in the two SA groups, the EEG features were examined using time-frequency analysis during the poor-visibility situations of the examination. Since variations in the features of a single channel are most likely the superimposed effects of stimuli in adjacent regions, the time-frequency features must be analyzed in combination with the spatial domain (i.e., different brain regions), which represents the dynamic distribution of features across brain regions over time. Therefore, the PSDs of different brain regions during a selected time period were computed; the PSD characterizes brain activity by describing how signal power is distributed across frequencies. The results are shown in <xref rid="fig4" ref-type="fig">Figure 4</xref>. The PSD values were higher in the low-SA group than in the high-SA group, indicating that PSDs may initially reflect brain activity, i.e., the brain activity of pilots in the low-SA group is more susceptible to external stimuli. The PSD features in the F and C regions differed more obviously between the SA groups, suggesting that these two brain regions might yield better results for identifying SA levels.</p>
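A band-limited PSD summary of the kind described above can be sketched as follows; the sampling rate, band edges, and synthetic signal here are illustrative assumptions, not the study's recording parameters:

```python
# Hedged sketch of band power from a periodogram-style PSD estimate.
import numpy as np

def band_power(signal, fs, band):
    """Average PSD (simple periodogram) within a frequency band [lo, hi)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t)             # synthetic 10 Hz (alpha-band) tone
alpha = band_power(eeg, fs, (8.0, 13.0))     # conventional alpha band edges
beta = band_power(eeg, fs, (13.0, 30.0))     # conventional beta band edges
```

For a pure 10 Hz tone, almost all power falls in the alpha band, so `alpha` far exceeds `beta`.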
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Fragment of the PSDs and brain activity in different SA groups.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g004.tif"/>
</fig>
<p>To further investigate the correlation between SA levels and brain-region activity in specific frequency bands, the average power of the &#x03B4;, &#x03B8;, &#x03B1;, &#x03B2;, and &#x03B3; bands was extracted for activity analysis, and the results are shown in <xref rid="fig4" ref-type="fig">Figure 4</xref>. The variability of the pilots&#x2019; brain activity is concentrated mainly in the &#x03B8;, &#x03B1;, and &#x03B2; frequency bands, while the &#x03B4; and &#x03B3; bands did not vary significantly owing to their very low and very high frequency ranges, respectively. Moreover, the difference in brain activity between the high- and low-SA groups is more apparent in the F, C, P, and O regions, including the C and O regions in the &#x03B8; band, the C and P regions in the &#x03B1; band, and the F and P regions in the &#x03B2; band. These findings provide a preliminary reference for screening brain regions in the identification model.</p>
<p>Time-frequency analysis provides joint frequency- and time-domain information to illustrate the time-varying frequency content. To understand the variability of EEG time-frequency features at high and low SA levels, a time period with poor visibility was selected for feature analysis to obtain the EEG frequency distribution over time in each brain region, as shown in <xref rid="fig5" ref-type="fig">Figure 5</xref>. The results show significant differences in the time-frequency features between the SA groups: the variability in the F and C regions lies mainly at high frequencies, whereas that in the P and O regions is more obvious at low frequencies. As only short time periods of EEG features were selected for analysis, further statistical tests on the complete task processes are needed. However, the results of the EEG time-frequency features and brain-area activity were consistent; e.g., the finding that the high-SA group was relatively inactive at high frequencies in the P and O regions was also reflected in the brain-activity analysis, suggesting that EEG states can be cross-referenced across feature dimensions. Overall, the brain-region activity and time-frequency analyses consistently showed relatively significant differences in the EEG signals between the high- and low-SA groups in the &#x03B8;, &#x03B1;, and &#x03B2; frequencies in the F, C, P, and O regions. The visualization and analysis of these recorded EEG features help interpret the results qualitatively and thus support the quantitative SA identification results.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Fragment of the time-frequency in different SA groups.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g005.tif"/>
</fig>
</sec>
<sec id="sec18">
<title>3.2. Correlation evaluation</title>
<p>To determine whether these differences are statistically significant, the average power of the EEG frequency metrics from each SA group is compared across the brain regions using a permutation simulation technique. As before, the &#x03B8;, &#x03B1;, and &#x03B2; frequencies in the F, C, P, and O regions are selected for statistical analysis, as they are the primary sources of difference in brain activity between the high- and low-SA groups in the visualized time-frequency analyses. The statistical differences between the EEG metrics and SA levels in the four regions across the poor visibility situations are calculated in the permutation simulations. Descriptive statistics of five commonly used frequency-combination metrics [&#x03B8;/&#x03B2;, &#x03B1;/&#x03B2;, &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;), (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2;, and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/(&#x03B1;&#x2009;+&#x2009;&#x03B2;)] related to cognitive function for the two SA groups, together with the results of the statistical tests, are summarized in <xref rid="tab1" ref-type="table">Table 1</xref>.</p>
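The permutation test above can be sketched as follows: Welch's t is the observed statistic, and the null distribution is built by shuffling group labels. The permutation count and data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def permutation_welch_t(high, low, n_perm=10000, seed=0):
    """Two-sided permutation p value using Welch's t as the statistic."""
    rng = np.random.default_rng(seed)
    t_obs = stats.ttest_ind(high, low, equal_var=False).statistic
    pooled = np.concatenate([high, low])
    n = len(high)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # break the group assignment
        t_perm = stats.ttest_ind(pooled[:n], pooled[n:],
                                 equal_var=False).statistic
        if abs(t_perm) >= abs(t_obs):
            count += 1
    return t_obs, count / n_perm
```

Running this per metric and region on the group-level average powers yields the Welch's t and p values of the kind reported in Table 1.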
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Correlation results in poor visibility situations.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th rowspan="2"/>
<th align="center" valign="top" rowspan="2">EEG frequency metrics</th>
<th align="center" valign="top" colspan="2">High-SA</th>
<th align="center" valign="top" colspan="2">Low-SA</th>
<th align="center" valign="top" colspan="2">Permutation results</th>
</tr>
<tr>
<th align="center" valign="top">Mean</th>
<th align="center" valign="top">Std.</th>
<th align="center" valign="top">Mean</th>
<th align="center" valign="top">Std.</th>
<th align="center" valign="top">Welch&#x2019;s t</th>
<th align="center" valign="top"><italic>p</italic> value</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle" rowspan="5">F region</td>
<td align="center" valign="middle">&#x03B8;/&#x03B2;</td>
<td align="center" valign="top">0.376</td>
<td align="center" valign="top">0.074</td>
<td align="center" valign="top">0.914</td>
<td align="center" valign="top">0.164</td>
<td align="center" valign="top">1.263</td>
<td align="left" valign="top">0.132</td>
</tr>
<tr>
<td align="center" valign="middle">
<bold>&#x03B1;/&#x03B2;</bold>
</td>
<td align="center" valign="top">0.457</td>
<td align="center" valign="top">0.003</td>
<td align="center" valign="top">0.752</td>
<td align="center" valign="top">0.102</td>
<td align="center" valign="top">1.786</td>
<td align="left" valign="top">
<bold>0.071</bold><sup>
<bold>b</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="middle">
<bold>&#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;)</bold>
</td>
<td align="center" valign="top">0.253</td>
<td align="center" valign="top">0.113</td>
<td align="center" valign="top">0.904</td>
<td align="center" valign="top">0.036</td>
<td align="center" valign="top">2.228</td>
<td align="left" valign="top">
<bold>0.048</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="middle">
<bold>(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2;</bold>
</td>
<td align="center" valign="top">0.833</td>
<td align="center" valign="top">0.176</td>
<td align="center" valign="top">1.666</td>
<td align="center" valign="top">0.241</td>
<td align="center" valign="top">2.151</td>
<td align="left" valign="top">
<bold>0.046</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="middle">(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/(&#x03B1;&#x2009;+&#x2009;&#x03B2;)</td>
<td align="center" valign="top">0.765</td>
<td align="center" valign="top">0.283</td>
<td align="center" valign="top">1.254</td>
<td align="center" valign="top">0.351</td>
<td align="center" valign="top">2.014</td>
<td align="left" valign="top">0.350</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="5">C region</td>
<td align="center" valign="top">&#x03B8;/&#x03B2;</td>
<td align="center" valign="top">0.395</td>
<td align="center" valign="top">0.024</td>
<td align="center" valign="top">0.805</td>
<td align="center" valign="top">0.245</td>
<td align="center" valign="top">1.149</td>
<td align="left" valign="top">0.147</td>
</tr>
<tr>
<td align="center" valign="top">
<bold>&#x03B1;/&#x03B2;</bold>
</td>
<td align="center" valign="top">0.462</td>
<td align="center" valign="top">0.034</td>
<td align="center" valign="top">0.812</td>
<td align="center" valign="top">0.015</td>
<td align="center" valign="top">2.353</td>
<td align="left" valign="top">
<bold>0.042</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="top">
<bold>&#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;)</bold>
</td>
<td align="center" valign="top">0.257</td>
<td align="center" valign="top">0.029</td>
<td align="center" valign="top">0.461</td>
<td align="center" valign="top">0.054</td>
<td align="center" valign="top">2.493</td>
<td align="left" valign="top">
<bold>0.026</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="top">
<bold>(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2;</bold>
</td>
<td align="center" valign="top">0.857</td>
<td align="center" valign="top">0.143</td>
<td align="center" valign="top">1.617</td>
<td align="center" valign="top">0.335</td>
<td align="center" valign="top">2.362</td>
<td align="left" valign="top">
<bold>0.012</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="top">(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/(&#x03B1;&#x2009;+&#x2009;&#x03B2;)</td>
<td align="center" valign="top">0.562</td>
<td align="center" valign="top">0.167</td>
<td align="center" valign="top">0.865</td>
<td align="center" valign="top">0.264</td>
<td align="center" valign="top">0.425</td>
<td align="left" valign="top">0.618</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="5">P region</td>
<td align="center" valign="top">
<bold>&#x03B8;/&#x03B2;</bold>
</td>
<td align="center" valign="top">0.388</td>
<td align="center" valign="top">0.146</td>
<td align="center" valign="top">0.755</td>
<td align="center" valign="top">0.164</td>
<td align="center" valign="top">2.339</td>
<td align="left" valign="top">
<bold>0.044</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="top">&#x03B1;/&#x03B2;</td>
<td align="center" valign="top">0.473</td>
<td align="center" valign="top">0.217</td>
<td align="center" valign="top">0.653</td>
<td align="center" valign="top">0.247</td>
<td align="center" valign="top">1.324</td>
<td align="left" valign="top">0.342</td>
</tr>
<tr>
<td align="center" valign="top">&#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;)</td>
<td align="center" valign="top">0.268</td>
<td align="center" valign="top">0.094</td>
<td align="center" valign="top">0.509</td>
<td align="center" valign="top">0.349</td>
<td align="center" valign="top">1.926</td>
<td align="left" valign="top">0.435</td>
</tr>
<tr>
<td align="center" valign="top">
<bold>(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2;</bold>
</td>
<td align="center" valign="top">0.861</td>
<td align="center" valign="top">0.294</td>
<td align="center" valign="top">1.408</td>
<td align="center" valign="top">0.226</td>
<td align="center" valign="top">2.236</td>
<td align="left" valign="top">
<bold>0.038</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="top">(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/(&#x03B1;&#x2009;+&#x2009;&#x03B2;)</td>
<td align="center" valign="top">0.594</td>
<td align="center" valign="top">0.243</td>
<td align="center" valign="top">0.816</td>
<td align="center" valign="top">0.381</td>
<td align="center" valign="top">1.369</td>
<td align="left" valign="top">0.236</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="5">O region</td>
<td align="center" valign="top">&#x03B8;/&#x03B2;</td>
<td align="center" valign="top">0.373</td>
<td align="center" valign="top">0.211</td>
<td align="center" valign="top">0.846</td>
<td align="center" valign="top">0.429</td>
<td align="center" valign="top">1.721</td>
<td align="left" valign="top">0.116</td>
</tr>
<tr>
<td align="center" valign="top">&#x03B1;/&#x03B2;</td>
<td align="center" valign="top">0.457</td>
<td align="center" valign="top">0.214</td>
<td align="center" valign="top">0.772</td>
<td align="center" valign="top">0.304</td>
<td align="center" valign="top">1.355</td>
<td align="left" valign="top">0.334</td>
</tr>
<tr>
<td align="center" valign="top">
<bold>&#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;)</bold>
</td>
<td align="center" valign="top">0.264</td>
<td align="center" valign="top">0.108</td>
<td align="center" valign="top">0.624</td>
<td align="center" valign="top">0.241</td>
<td align="center" valign="top">2.486</td>
<td align="left" valign="top">
<bold>0.027</bold><sup>
<bold>a</bold>
</sup></td>
</tr>
<tr>
<td align="center" valign="top">(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2;</td>
<td align="center" valign="top">0.581</td>
<td align="center" valign="top">0.143</td>
<td align="center" valign="top">0.956</td>
<td align="center" valign="top">0.156</td>
<td align="center" valign="top">1.583</td>
<td align="left" valign="top">0.152</td>
</tr>
<tr>
<td align="center" valign="top">(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/(&#x03B1;&#x2009;+&#x2009;&#x03B2;)</td>
<td align="center" valign="top">0.534</td>
<td align="center" valign="top">0.164</td>
<td align="center" valign="top">0.721</td>
<td align="center" valign="top">0.264</td>
<td align="center" valign="top">1.188</td>
<td align="left" valign="top">0.435</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><sup>a</sup><italic>p</italic>&#x2009;&#x003C;&#x2009;0.05, <sup>b</sup><italic>p</italic>&#x2009;&#x003C;&#x2009;0.1. The bold values indicate significantly correlated metrics (i.e., <italic>p</italic> &#x003C; 0.1).</p>
</table-wrap-foot>
</table-wrap>
<p>In the statistical analysis of the F region, the test results show that &#x03B1;/&#x03B2; (<italic>p</italic> =&#x2009;0.071&#x2009;&#x003C;&#x2009;0.1), &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;) (<italic>p</italic> =&#x2009;0.048&#x2009;&#x003C;&#x2009;0.05), and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2; (<italic>p</italic> =&#x2009;0.046&#x2009;&#x003C;&#x2009;0.05) are correlated with the SA level. In the C region, &#x03B1;/&#x03B2; (<italic>p</italic> =&#x2009;0.042&#x2009;&#x003C;&#x2009;0.05), &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;) (<italic>p</italic> =&#x2009;0.026&#x2009;&#x003C;&#x2009;0.05), and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2; (<italic>p</italic> =&#x2009;0.012&#x2009;&#x003C;&#x2009;0.05) are significantly correlated with the SA level. The pilot&#x2019;s SA level also affects the power of &#x03B8;/&#x03B2; (<italic>p</italic> =&#x2009;0.044&#x2009;&#x003C;&#x2009;0.05) and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2; (<italic>p</italic> =&#x2009;0.038&#x2009;&#x003C;&#x2009;0.05) in the P region, and &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;) (<italic>p</italic> =&#x2009;0.027&#x2009;&#x003C;&#x2009;0.05) in the O region. Poor visibility makes it difficult for pilots to obtain, in a timely manner, the feedforward information on ship collision hazards needed to take safe pilotage measures without neglecting stored materials. This demonstrates that they urgently need real-time feedforward information to understand the current situation, which imposes a higher demand on the SA level. Because the average power of the combination metrics was generally higher in the low-SA group than in the high-SA group, the descriptive statistics suggest that low-SA pilots may be more susceptible to fluctuations in the external navigational environment.</p>
<p>Combined with the correlation evaluation of the F, C, P, and O regions in poor visibility situations, these results open the possibility for follow-up studies to identify pilots&#x2019; at-risk SA levels using the correlated EEG frequency-combination metrics. The &#x03B1;/&#x03B2;, &#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;), and (&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2; metrics in the F and C regions may be more useful for distinguishing between pilots with different SA levels. Moreover, to reduce the data volume and noise, the mean and median power of the EEG frequency metrics within a sliding time window of 5&#x2009;s (i.e., the epoch length) were used for data separation and feature extraction. The calculated results were then treated as candidate features for the subsequent identification model, as listed in <xref rid="tab2" ref-type="table">Table 2</xref>.</p>
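The 5-s windowing step above can be sketched as follows: the metric time series is cut into consecutive epochs, and the mean and median of each epoch become candidate features. The sampling rate is an assumption for illustration.

```python
import numpy as np

def window_features(metric, fs=250.0, epoch_s=5.0):
    """Mean and median of a metric over consecutive 5 s epochs."""
    step = int(fs * epoch_s)                # samples per epoch
    n_epochs = len(metric) // step          # drop the trailing remainder
    trimmed = np.asarray(metric[: n_epochs * step]).reshape(n_epochs, step)
    return trimmed.mean(axis=1), np.median(trimmed, axis=1)
```

Applied per region and per metric, this produces the mean-power (FM/CM) and median-power (FP/CP) features of Table 2.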
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>List of calculated features.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Signal type</th>
<th align="center" valign="top">Features</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle" rowspan="4">&#x03B1;/&#x03B2;</td>
<td align="center" valign="top">Average power of F region in 5&#x2009;s (FM1)</td>
</tr>
<tr>
<td align="center" valign="top">Median power of F region in 5&#x2009;s (FP1)</td>
</tr>
<tr>
<td align="center" valign="top">Average power of C region in 5&#x2009;s (CM1)</td>
</tr>
<tr>
<td align="center" valign="top">Median power of C region in 5&#x2009;s (CP1)</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="4">&#x03B8;/(&#x03B1;&#x2009;+&#x2009;&#x03B8;)</td>
<td align="center" valign="top">Average power of F region in 5&#x2009;s (FM2)</td>
</tr>
<tr>
<td align="center" valign="top">Median power of F region in 5&#x2009;s (FP2)</td>
</tr>
<tr>
<td align="center" valign="top">Average power of C region in 5&#x2009;s (CM2)</td>
</tr>
<tr>
<td align="center" valign="top">Median power of C region in 5&#x2009;s (CP2)</td>
</tr>
<tr>
<td align="left" valign="middle" rowspan="4">(&#x03B1;&#x2009;+&#x2009;&#x03B8;)/&#x03B2;</td>
<td align="center" valign="top">Average power of F region in 5&#x2009;s (FM3)</td>
</tr>
<tr>
<td align="center" valign="top">Median power of F region in 5&#x2009;s (FP3)</td>
</tr>
<tr>
<td align="center" valign="top">Average power of C region in 5&#x2009;s (CM3)</td>
</tr>
<tr>
<td align="center" valign="top">Median power of C region in 5&#x2009;s (CP3)</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="sec19">
<title>3.3. SA identification</title>
<p>In this study, a nonlinear RF-CNN method was used for binary identification of the cognitive state (i.e., high and low SA levels) based on the frequency features extracted from the EEG data of the ship pilotage experiment. First, after preprocessing the data with time-frequency analysis, the EEG features associated with pilot SA levels were classified into 12 categories based on the results of the correlation evaluation, as listed in <xref rid="tab2" ref-type="table">Table 2</xref>. The features were then separated into training and testing sets before the RF-CNN method was constructed, with the training set randomly selected to account for 75% of the feature samples. <xref rid="fig6" ref-type="fig">Figure 6</xref> shows the optimal parameters of the RF method obtained using the grid search method: the maximum search efficiency of 0.7399 was obtained with 11 estimators and a maximum depth of 10. With the optimal parameters of the RF method confirmed, the initial feature importance ranking based on the average score was obtained (see <xref rid="tab3" ref-type="table">Table 3</xref>).</p>
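The grid search and importance-ranking step can be sketched with scikit-learn. The synthetic data, label rule, and grid bounds below are assumptions for illustration; the article only reports the resulting optimum (11 estimators, maximum depth 10).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))                  # 12 candidate EEG features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary SA label

# 75% of samples for training, as in the article.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                          random_state=0)

# Grid search over estimator count and tree depth (assumed grid).
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [5, 11, 15],
                     "max_depth": [5, 10, 15]}, cv=3)
grid.fit(X_tr, y_tr)

importances = grid.best_estimator_.feature_importances_
ranking = np.argsort(importances)[::-1]         # initial importance ranking
```

The `ranking` array plays the role of the Table 3 ordering: it decides which feature is dropped first in the screening loop.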
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Parameter optimization of the RF method.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g006.tif"/>
</fig>
<table-wrap position="float" id="tab3">
<label>Table 3</label>
<caption>
<p>Initial feature importance score and ranking.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Number</th>
<th align="center" valign="top">Feature</th>
<th align="center" valign="top">Score</th>
<th align="center" valign="top">Number</th>
<th align="center" valign="top">Feature</th>
<th align="center" valign="top">Score</th>
<th align="center" valign="top">Number</th>
<th align="center" valign="top">Feature</th>
<th align="center" valign="top">Score</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">1</td>
<td align="center" valign="middle">FM2</td>
<td align="center" valign="middle">0.225</td>
<td align="center" valign="middle">5</td>
<td align="center" valign="middle">CM1</td>
<td align="center" valign="middle">0.082</td>
<td align="center" valign="middle">9</td>
<td align="center" valign="middle">CM3</td>
<td align="center" valign="middle">0.063</td>
</tr>
<tr>
<td align="left" valign="middle">2</td>
<td align="center" valign="middle">CP1</td>
<td align="center" valign="middle">0.094</td>
<td align="center" valign="middle">6</td>
<td align="center" valign="middle">FP3</td>
<td align="center" valign="middle">0.079</td>
<td align="center" valign="middle">10</td>
<td align="center" valign="middle">FM1</td>
<td align="center" valign="middle">0.059</td>
</tr>
<tr>
<td align="left" valign="middle">3</td>
<td align="center" valign="middle">CM2</td>
<td align="center" valign="middle">0.086</td>
<td align="center" valign="middle">7</td>
<td align="center" valign="middle">CP2</td>
<td align="center" valign="middle">0.070</td>
<td align="center" valign="middle">11</td>
<td align="center" valign="middle">CP1</td>
<td align="center" valign="middle">0.051</td>
</tr>
<tr>
<td align="left" valign="middle">4</td>
<td align="center" valign="middle">FP2</td>
<td align="center" valign="middle">0.085</td>
<td align="center" valign="middle">8</td>
<td align="center" valign="middle">FM3</td>
<td align="center" valign="middle">0.064</td>
<td align="center" valign="middle">12</td>
<td align="center" valign="middle">CP3</td>
<td align="center" valign="middle">0.048</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>To prevent overfitting of the identification model, the feature with the lowest importance score in <xref rid="tab3" ref-type="table">Table 3</xref> was eliminated, and the remaining feature data were fed back into the RF method. After 11 iterations of this feature-screening process, the RMSE and relative error (RE) were obtained for 1&#x2013;11 features, as listed in <xref rid="tab4" ref-type="table">Table 4</xref>. The results show that the RMSE and RE of the identification were minimized with 10 features; therefore, CP1 and CP3 were eliminated, and 10 valid features were retained. Moreover, because the EEG frequency-combination features of the same individual may contain overlapping information, the PCA algorithm was used to reduce the dimensionality of the features. The first 10 features in the importance ranking were processed by the PCA algorithm, where <inline-formula>
<mml:math id="M37">
<mml:mrow>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>-<inline-formula>
<mml:math id="M38">
<mml:mrow>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> corresponded to the features with importance rankings 1&#x2013;10. The first six principal components were selected as the extraction criterion, as their cumulative contribution to the variance exceeded 87.7%. The eigenvalues of the correlation-coefficient matrix and the contribution rates of the principal components are listed in <xref rid="tab5" ref-type="table">Table 5</xref>. These six principal components are:</p>
<disp-formula id="E1">
<mml:math id="M39">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1226</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0706</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.2358</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1079</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0992</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0.0693</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0716</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1063</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>8</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0650</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>9</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0625</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="E2">
<mml:math id="M40">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.2015</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0979</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1352</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1052</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1147</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0.0713</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0676</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1012</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>8</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0450</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>9</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0705</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="E3">
<mml:math id="M41">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1526</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1203</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0718</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1653</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1201</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0.0732</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0685</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1041</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>8</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0732</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>9</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0523</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="E4">
<mml:math id="M42">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1206</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.2013</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1017</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1318</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0797</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0.0552</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0765</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0671</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>8</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1092</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>9</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0643</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="E5">
<mml:math id="M43">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.2156</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1501</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0827</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1018</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0787</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0.0912</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0615</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0671</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>8</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1022</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>9</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0563</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="E6">
<mml:math id="M44">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.1906</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1315</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1037</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0918</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1253</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0.0652</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0890</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>7</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0761</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>8</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1091</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mn>9</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.0573</mml:mn>
<mml:msub>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<disp-formula id="E7">
<mml:math id="M45">
<mml:mtable columnalign="left">
<mml:mtr>
<mml:mtd>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mn>0.2862</mml:mn>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1965</mml:mn>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1616</mml:mn>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mn>0.1317</mml:mn>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>4</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1142</mml:mn>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>5</mml:mn>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mn>0.1097</mml:mn>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>6</mml:mn>
</mml:msub>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
</disp-formula>
<table-wrap position="float" id="tab4">
<label>Table 4</label>
<caption>
<p>Error values with various numbers of features.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Number of features</th>
<th align="center" valign="top">RMSE</th>
<th align="center" valign="top">RE</th>
<th align="center" valign="top">Number of features</th>
<th align="center" valign="top">RMSE</th>
<th align="center" valign="top">RE</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1</td>
<td align="center" valign="top">0.875</td>
<td align="center" valign="top">0.498</td>
<td align="center" valign="top">7</td>
<td align="center" valign="top">0.751</td>
<td align="center" valign="top">0.359</td>
</tr>
<tr>
<td align="left" valign="top">2</td>
<td align="center" valign="top">0.802</td>
<td align="center" valign="top">0.427</td>
<td align="center" valign="top">8</td>
<td align="center" valign="top">0.704</td>
<td align="center" valign="top">0.312</td>
</tr>
<tr>
<td align="left" valign="top">3</td>
<td align="center" valign="top">0.765</td>
<td align="center" valign="top">0.396</td>
<td align="center" valign="top">9</td>
<td align="center" valign="top">0.675</td>
<td align="center" valign="top">0.297</td>
</tr>
<tr>
<td align="left" valign="top">4</td>
<td align="center" valign="top">0.735</td>
<td align="center" valign="top">0.371</td>
<td align="center" valign="top">10</td>
<td align="center" valign="top">0.627</td>
<td align="center" valign="top">0.209</td>
</tr>
<tr>
<td align="left" valign="top">5</td>
<td align="center" valign="top">0.732</td>
<td align="center" valign="top">0.363</td>
<td align="center" valign="top">11</td>
<td align="center" valign="top">0.702</td>
<td align="center" valign="top">0.282</td>
</tr>
<tr>
<td align="left" valign="top">6</td>
<td align="center" valign="top">0.741</td>
<td align="center" valign="top">0.370</td>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
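The feature-screening loop that produced the error values above can be sketched as follows: repeatedly drop the feature with the lowest RF importance, record the test RMSE at each feature count, and keep the count that minimizes it. The synthetic regression data are assumptions; the article retained 10 of the 12 features this way.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

kept = list(range(12))          # feature indices still in play
rmse_by_count = {}
while len(kept) >= 1:
    rf = RandomForestRegressor(n_estimators=50, random_state=0)
    rf.fit(X_tr[:, kept], y_tr)
    pred = rf.predict(X_te[:, kept])
    rmse_by_count[len(kept)] = mean_squared_error(y_te, pred) ** 0.5
    # drop the least important remaining feature
    kept.pop(int(np.argmin(rf.feature_importances_)))

best_count = min(rmse_by_count, key=rmse_by_count.get)
```

`rmse_by_count` corresponds to the RMSE column of Table 4, and `best_count` to the retained feature count.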
<table-wrap position="float" id="tab5">
<label>Table 5</label>
<caption>
<p>Principal components of features.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Component</th>
<th align="center" valign="top">Eigenvalues</th>
<th align="center" valign="top">Contribution (%)</th>
<th align="center" valign="top">Cumulative contribution (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">F<sub>1</sub></td>
<td align="center" valign="top">5.193</td>
<td align="center" valign="top">25.108</td>
<td align="center" valign="top">25.108</td>
</tr>
<tr>
<td align="left" valign="top">F<sub>2</sub></td>
<td align="center" valign="top">4.109</td>
<td align="center" valign="top">17.233</td>
<td align="center" valign="top">42.341</td>
</tr>
<tr>
<td align="left" valign="top">F<sub>3</sub></td>
<td align="center" valign="top">1.825</td>
<td align="center" valign="top">14.176</td>
<td align="center" valign="top">56.517</td>
</tr>
<tr>
<td align="left" valign="top">F<sub>4</sub></td>
<td align="center" valign="top">1.372</td>
<td align="center" valign="top">11.548</td>
<td align="center" valign="top">68.065</td>
</tr>
<tr>
<td align="left" valign="top">F<sub>5</sub></td>
<td align="center" valign="top">3.014</td>
<td align="center" valign="top">10.016</td>
<td align="center" valign="top">78.081</td>
</tr>
<tr>
<td align="left" valign="top">F<sub>6</sub></td>
<td align="center" valign="top">2.917</td>
<td align="center" valign="top">9.624</td>
<td align="center" valign="top">87.705</td>
</tr>
</tbody>
</table>
</table-wrap>
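As a quick check, the cumulative-contribution column of the table above is simply a running sum of the per-component contribution percentages; a minimal sketch (the function name is illustrative, not from the original study):

```python
def cumulative_contribution(contributions):
    """Running sum of per-component contribution percentages.

    Illustrative helper (name is ours); reproduces the cumulative
    column of a PCA contribution table from its per-component column.
    """
    total = 0.0
    out = []
    for c in contributions:
        total += c
        out.append(round(total, 3))  # round away floating-point noise
    return out
```

Applied to the six contribution values above, this reproduces the cumulative column ending at 87.705%.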
<p>The first six principal components were extracted as feature combinations and used to construct new input sets for the modified CNN model. Before training, the initial hyperparameters were preset: a batch size of 20, a 5&#x2009;&#x00D7;&#x2009;5 convolutional kernel, 3&#x2009;&#x00D7;&#x2009;3 maximum pooling, a network learning rate of 0.1, and a weight decay parameter of 0.0001. During training, the learning rate was reduced by 50% whenever the loss function increased by more than 25%. <xref rid="fig7" ref-type="fig">Figure 7A</xref> shows that the training loss reaches the expected level once the iterations exceed 900, and that the PCA-modified CNN model converges faster and attains lower loss values than the traditional CNN. Further, comparing the identification accuracy of the two methods, <xref rid="fig7" ref-type="fig">Figure 7B</xref> shows that the identification rate of the modified CNN remains around 84.8% after more than 1,200 iterations. These results verify that, in poor visibility situations, the PCA method improves both the convergence rate and the identification accuracy of the traditional CNN.</p>
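The learning-rate rule described above (halve the rate whenever the loss grows by more than 25%) can be sketched as a small scheduler; the function and parameter names are illustrative assumptions, not the authors' code:

```python
def adjust_learning_rate(lr, prev_loss, curr_loss,
                         rise_threshold=0.25, decay_factor=0.5):
    """Halve the learning rate when the loss increases by more than 25%.

    Illustrative sketch of the schedule described in the text
    (rise_threshold=0.25, decay_factor=0.5); names are ours.
    """
    if prev_loss > 0 and (curr_loss - prev_loss) / prev_loss > rise_threshold:
        return lr * decay_factor
    return lr
```

In a training loop, this would be called once per epoch with the previous and current loss values, starting from the preset rate of 0.1.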
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>The loss value <bold>(A)</bold> and accuracy <bold>(B)</bold> of CNN modified method training.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g007.tif"/>
</fig>
<p>In the comparative analysis of the RF, CNN, and RF-CNN methods, three performance evaluation metrics were used to evaluate the effectiveness of the optimized feature combinations. <xref rid="fig8" ref-type="fig">Figure 8</xref> presents the distribution of the classification accuracy measures for the three classifiers under the different performance metrics. In each box, the bottom and top edges indicate the 25th and 75th percentiles, respectively, and the center mark denotes the median. Using the optimal EEG frequency feature combinations in the RF-CNN method, the average ACC over 1,200 calculations reached 0.848, while the TPR was 0.894 and the TNR was 0.860. Moreover, to further evaluate the classification performance of the method, the Matthews correlation coefficient (MCC), F1-score, and Kappa indexes were computed, and all fell within reasonable ranges. In general, the RF-CNN outperformed the RF and CNN methods, which do not use feature optimization.</p>
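The secondary metrics mentioned above (MCC, F1-score, and Cohen's kappa), like ACC, TPR, and TNR, all derive from the binary confusion matrix; a self-contained sketch under the usual definitions (not the study's own implementation):

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    total = tp + fp + fn + tn
    acc = (tp + tn) / total
    tpr = tp / (tp + fn)          # sensitivity / recall
    tnr = tn / (tn + fp)          # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    # Matthews correlation coefficient
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Cohen's kappa: agreement beyond chance
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (acc - pe) / (1 - pe)
    return {"acc": acc, "tpr": tpr, "tnr": tnr,
            "f1": f1, "mcc": mcc, "kappa": kappa}
```

For a balanced confusion matrix such as `binary_metrics(45, 5, 5, 45)`, ACC, TPR, and TNR are all 0.9 while MCC and kappa are 0.8, illustrating how the chance-corrected indexes sit below raw accuracy.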
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption>
<p>Performance evaluation of the three methods.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g008.tif"/>
</fig>
<p>Receiver operating characteristic (ROC) curves give an intuitive graphical view of a classifier&#x2019;s accuracy and are usually combined with the area under the curve (AUC) to evaluate binary classification (<xref ref-type="bibr" rid="ref32">Omar and Ivrissimtzis, 2019</xref>). With the optimal RF-CNN parameters, we retrained and retested the three algorithms and compared their ROC curves and AUC scores. The results are shown in <xref rid="fig9" ref-type="fig">Figure 9</xref>. The AUC scores were 0.875, 0.867, and 0.924 for the RF, CNN, and RF-CNN algorithms, respectively, demonstrating the better stability of the RF-CNN. The superior sensitivity and specificity of the RF-CNN algorithm are confirmed by its TPR and TNR scores, respectively. <xref rid="tab6" ref-type="table">Table 6</xref> reports the performance of the three classification algorithms under the evaluation methodology described above. Using the optimized features as input data, the RF-CNN algorithm achieved an average accuracy of 0.848, average sensitivity of 0.894, average specificity of 0.860, and an AUC score of 0.924. These results demonstrate that the RF-CNN with optimized parameters outperforms traditional algorithms in identifying EEG frequency combination features across different SA levels, providing an important cognitive avenue for constructing a screening and evaluation model of pilots&#x2019; competency.</p>
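The AUC need not be read off a plotted curve: it equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann&#x2013;Whitney formulation, equivalent to the area under the ROC curve). A minimal sketch with hypothetical variable names, not the study's implementation:

```python
def auc_score(labels, scores):
    """Rank-based AUC: P(score of a positive > score of a negative),
    counting ties as 1/2. Equivalent to the area under the ROC curve."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking of positives above negatives yields 1.0, a reversed ranking 0.0, and chance-level scoring 0.5.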
<fig position="float" id="fig9">
<label>Figure 9</label>
<caption>
<p>ROC curve of the three methods.</p>
</caption>
<graphic xlink:href="fnins-17-1172103-g009.tif"/>
</fig>
<table-wrap position="float" id="tab6">
<label>Table 6</label>
<caption>
<p>Comparative analysis of performance metrics.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Classification algorithm</th>
<th align="center" valign="top">Accuracy</th>
<th align="center" valign="top">Sensitivity</th>
<th align="center" valign="top">Specificity</th>
<th align="center" valign="top">AUC</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">RF</td>
<td align="center" valign="middle">0.781</td>
<td align="center" valign="middle">0.857</td>
<td align="center" valign="middle">0.855</td>
<td align="center" valign="middle">0.875</td>
</tr>
<tr>
<td align="left" valign="middle">CNN</td>
<td align="center" valign="middle">0.816</td>
<td align="center" valign="middle">0.815</td>
<td align="center" valign="middle">0.827</td>
<td align="center" valign="middle">0.867</td>
</tr>
<tr>
<td align="left" valign="middle">RF-CNN</td>
<td align="center" valign="middle">0.848</td>
<td align="center" valign="middle">0.894</td>
<td align="center" valign="middle">0.860</td>
<td align="center" valign="middle">0.924</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="sec20" sec-type="discussions">
<title>4. Discussion</title>
<p>The results highlight that the RF-CNN method achieves good SA identification using six principal components of EEG features (<xref rid="tab5" ref-type="table">Table 5</xref>). EEG data were collected from 25 pilots throughout the experimental process, which assessed pilots&#x2019; competency in poor visibility situations; the poor visibility was designed to appear randomly during normal navigation in the simulation experiment. For the fundamental analysis of the EEG data, a permutation simulation was used to quantify the significance of the correlation between the EEG cognitive state, visualized through brain activity and time-frequency analysis, and the SA levels measured by SART. Because the window width of the short-time Fourier transform (STFT) is fixed and cannot be adjusted adaptively, the STFT is usually suited to the analysis of stationary signals. The wavelet transform (WT) instead observes local features of the signal through time windows whose widths adapt with frequency, but it is often treated as a single power signal in the spatial dimension. Therefore, combining the time-frequency and spatial features of the EEG signals, the frequency combination features of different brain regions in the time dimension were extracted for SA identification. The EEG frequency features were extracted from the associated metrics and divided into overlapping 5&#x2009;s epochs after digital and time-domain filtering. To prevent possible overfitting and information overlap, the initial number of input features was set to 10 based on the minimum RMSE and RE values of the RF model, and the PCA algorithm was then used to combine the EEG features into six principal components as the final input set of the CNN model. Moreover, PCA was applied to optimize the network structure of the CNN model to improve training convergence efficiency and accuracy, and the feasibility of these methods was verified by comparison with traditional methods (see Section 3.3 for details). The RF-CNN method was then used to obtain the identification accuracy of the at-risk cognitive competency (i.e., low SA level). Comparative analysis of the three performance metrics shows that the proposed method outperforms the two traditional methods, which do not use feature optimization.</p>
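The feature-count selection step described above (keeping the number of RF-ranked features that minimizes the model's RMSE) reduces to a one-line choice over candidate counts; an illustrative helper, with the RMSE values taken from the table in the Results section:

```python
def best_feature_count(rmse_by_n):
    """Pick the candidate feature count whose RF model gives minimum RMSE.

    Illustrative helper (name is ours); `rmse_by_n` maps a candidate
    number of input features to the RMSE of the RF model trained with
    that many features.
    """
    return min(rmse_by_n, key=rmse_by_n.get)

# RMSE per candidate feature count, from the Results table
rmse = {1: 0.875, 2: 0.802, 3: 0.765, 4: 0.735, 5: 0.732, 6: 0.741,
        7: 0.751, 8: 0.704, 9: 0.675, 10: 0.627, 11: 0.702}
```

With these values the minimum falls at 10 features, matching the initial input-feature count stated above.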
<p>The results of this study confirm that the at-risk cognitive state can be effectively detected with high accuracy, sensitivity, and specificity using EEG frequency features combined with an appropriate identification framework. The identification results obtained by the RF-CNN, RF, and CNN algorithms with different types of features are shown in <xref rid="fig8" ref-type="fig">Figure 8</xref>. Combining the top nine most important features improves the classification accuracy from 78.1%, using the RF without feature optimization, to 84.8% using the RF-CNN. Comparing the RF and CNN algorithms, which used the same feature data as their inputs, the latter produced better ACC, TPR, and TNR values, which verifies the applicability of the CNN to nonlinear, multidimensional EEG sample sets in poor visibility situations. Therefore, we conclude that an effective RF-CNN method for identifying the at-risk cognitive state should include a strong classifier after PCA optimization (i.e., the modified CNN module) and input data refined by combining salient EEG frequency features from different brain regions (i.e., the RF module).</p>
<p>An accurate comparison with the results of other EEG studies is difficult, because each study used a different simulation method, a different set of EEG features, and a different classification of SA groups. The main limitation of the current study concerns the performance of the EEG acquisition devices: during the simulation experiments, they are susceptible to environmental factors and physiological artifacts, which generate large amounts of noise. Although digital and time-domain filtering aids feature extraction, it inevitably affects the significance level of the correlation results. Moreover, regarding feature extraction, the identification accuracy depends on the correlation metrics and epoch lengths of the EEG data in poor visibility situations. This study shows that pilots with low SA levels are susceptible to environmental risk factors that produce significant EEG fluctuations, expressed as a total of 14 EEG frequency combination features in the F and C regions with an epoch length of 5&#x2009;s. However, the SA level is correlated not only with EEG time-frequency features but also with other physiological measurement metrics, as confirmed in previous studies (<xref ref-type="bibr" rid="ref29">Mehta et al., 2018</xref>; <xref ref-type="bibr" rid="ref49">Zahabi et al., 2021</xref>). Therefore, the identification of at-risk cognitive states is complicated and requires further investigation considering multiple fusion metrics, such as heart rate variability (HRV), electrocardiogram (ECG), and eye-tracking signals with different epoch lengths, as well as multiple classifications of SA groups (<xref ref-type="bibr" rid="ref34">Paulus and Remijn, 2021</xref>; <xref ref-type="bibr" rid="ref47">Xiong et al., 2021</xref>; <xref ref-type="bibr" rid="ref40">Ren et al., 2022</xref>).</p>
</sec>
<sec id="sec21" sec-type="conclusions">
<title>5. Conclusion</title>
<p>The results obtained with the proposed RF-CNN method confirm that it is feasible to identify pilots&#x2019; at-risk cognitive competency (i.e., low SA levels) using EEG features. Specifically, the EEG data of 25 ship pilots were obtained from bridge simulation experiments in poor visibility situations to train the identification model, which comprises RF, modified CNN, and validation modules. Six EEG principal components were produced by PCA, after RF feature selection under the RMSE criterion, to obtain the optimal model training sets, with a cumulative contribution rate of more than 87.7%. The experimental results demonstrate that the proposed feature combinations enhance classification performance over that of the RF and CNN, which do not employ feature optimization, demonstrating the potential of our approach for the computer-aided screening of pilots&#x2019; cognitive competency. Considering the universality of the proposed method, future research will focus on the verification and application of the identification model in different emergency situations (e.g., ship departure, two-ship crossing, and anchoring) with multiple fusion metrics (e.g., HRV, eye-tracking). This study therefore yields not only immediate benefits in monitoring cognitive competency and preventing unsafe behaviors, but also long-term benefits in opening new avenues for the construction of evaluation systems for the physical and mental competency of ship pilots.</p>
</sec>
<sec id="sec22" sec-type="data-availability">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="sec23">
<title>Ethics statement</title>
<p>Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="sec24">
<title>Author contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec id="sec25" sec-type="funding-information">
<title>Funding</title>
<p>This work was supported by the National Natural Science Foundation of China (Grant No. 71503166) and the Jinhua municipal public welfare technology application research project (Grant No. 2022-4-014).</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="ref1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Admiraal</surname> <given-names>M. M.</given-names></name> <name><surname>Ramos</surname> <given-names>L. A.</given-names></name> <name><surname>Delgado Olabarriaga</surname> <given-names>S.</given-names></name> <name><surname>Marquering</surname> <given-names>H. A.</given-names></name> <name><surname>Horn</surname> <given-names>J.</given-names></name> <name><surname>van Rootselaar</surname> <given-names>A. F.</given-names></name></person-group> (<year>2021</year>). <article-title>Quantitative analysis of EEG reactivity for neurological prognostication after cardiac arrest</article-title>. <source>Clin. Neurophysiol.</source> <volume>132</volume>, <fpage>2240</fpage>&#x2013;<lpage>2247</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.clinph.2021.07.004</pub-id>, PMID: <pub-id pub-id-type="pmid">34315065</pub-id></citation>
</ref>
<ref id="ref2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aggarwal</surname> <given-names>S.</given-names></name> <name><surname>Chugh</surname> <given-names>N.</given-names></name></person-group> (<year>2022</year>). <article-title>Review of machine learning techniques for EEG based brain computer interface</article-title>. <source>Comput. Method Eng.</source> <volume>29</volume>, <fpage>3001</fpage>&#x2013;<lpage>3020</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11831-021-09684-6</pub-id></citation>
</ref>
<ref id="ref3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Antao</surname> <given-names>P.</given-names></name> <name><surname>Soares</surname> <given-names>C. G.</given-names></name></person-group> (<year>2019</year>). <article-title>Analysis of the influence of human errors on the occurrence of coastal ship accidents in different wave conditions using bayesian belief networks</article-title>. <source>Accid. Anal. Prev.</source> <volume>133</volume>, <fpage>105262</fpage>&#x2013;<lpage>105217</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.aap.2019.105262</pub-id></citation>
</ref>
<ref id="ref5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bye</surname> <given-names>R. J.</given-names></name> <name><surname>Aalberg</surname> <given-names>A. L.</given-names></name></person-group> (<year>2018</year>). <article-title>Maritime navigation accidents and risk indicators: an exploratory statistical analysis using AIS data and accident reports</article-title>. <source>Reliab. Eng. Syst. Saf.</source> <volume>176</volume>, <fpage>174</fpage>&#x2013;<lpage>186</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ress.2018.03.033</pub-id></citation>
</ref>
<ref id="ref6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chauvin</surname> <given-names>C.</given-names></name> <name><surname>Lardjane</surname> <given-names>S.</given-names></name> <name><surname>Morel</surname> <given-names>G. L.</given-names></name> <name><surname>Clostermann</surname> <given-names>J. P.</given-names></name> <name><surname>Langard</surname> <given-names>B.</given-names></name></person-group> (<year>2013</year>). <article-title>Human and organisational factors in maritime accidents: analysis of collisions at sea using the HFACS</article-title>. <source>Accid. Anal. Prev.</source> <volume>59</volume>, <fpage>26</fpage>&#x2013;<lpage>37</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.aap.2013.05.006</pub-id>, PMID: <pub-id pub-id-type="pmid">23764875</pub-id></citation>
</ref>
<ref id="ref7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Darbra</surname> <given-names>R. M.</given-names></name> <name><surname>Crawford</surname> <given-names>J. F. E.</given-names></name> <name><surname>Haley</surname> <given-names>C. W.</given-names></name> <name><surname>Morrison</surname> <given-names>R. J.</given-names></name></person-group> (<year>2007</year>). <article-title>Safety culture and hazard risk perception of Australian and New Zealand maritime pilots</article-title>. <source>Mar. Policy</source> <volume>31</volume>, <fpage>736</fpage>&#x2013;<lpage>745</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.marpol.2007.02.004</pub-id></citation>
</ref>
<ref id="ref8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>di Flumeri</surname> <given-names>G.</given-names></name> <name><surname>Borghini</surname> <given-names>G.</given-names></name> <name><surname>Aric&#x00F2;</surname> <given-names>P.</given-names></name> <name><surname>Sciaraffa</surname> <given-names>N.</given-names></name> <name><surname>Lanzi</surname> <given-names>P.</given-names></name> <name><surname>Pozzi</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>EEG-based mental workload neurometric to evaluate the impact of different traffic and road conditions in real driving settings</article-title>. <source>Front. Hum. Neurosci.</source> <volume>12</volume>:<fpage>509</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnhum.2018.00509</pub-id></citation>
</ref>
<ref id="ref9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dimitriadis</surname> <given-names>S. I.</given-names></name> <name><surname>Laskaris</surname> <given-names>N. A.</given-names></name> <name><surname>Tsirka</surname> <given-names>V.</given-names></name> <name><surname>Vourkas</surname> <given-names>M.</given-names></name> <name><surname>Micheloyannis</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>What does delta band tell us about cognitive processes: a mental calculation study</article-title>. <source>Neurosci. Lett.</source> <volume>483</volume>, <fpage>11</fpage>&#x2013;<lpage>15</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neulet.2010.07.034</pub-id>, PMID: <pub-id pub-id-type="pmid">20654696</pub-id></citation>
</ref>
<ref id="ref10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Endsley</surname> <given-names>M. R.</given-names></name></person-group> (<year>2015</year>). <article-title>Situation awareness: operationally necessary and scientifically grounded</article-title>. <source>Cogn. Tech. Work</source> <volume>17</volume>, <fpage>163</fpage>&#x2013;<lpage>167</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10111-015-0323-5</pub-id></citation>
</ref>
<ref id="ref11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fan</surname> <given-names>X. L.</given-names></name> <name><surname>Feng</surname> <given-names>X. F.</given-names></name> <name><surname>Dong</surname> <given-names>Y. Y.</given-names></name></person-group> (<year>2022</year>). <article-title>COVID-19 CT image recognition algorithm based on transformer and CNN</article-title>. <source>Displays</source> <volume>72</volume>, <fpage>102150</fpage>&#x2013;<lpage>102159</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2022.102150</pub-id></citation>
</ref>
<ref id="ref12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Genuer</surname> <given-names>R.</given-names></name> <name><surname>Poggi</surname> <given-names>J. M.</given-names></name> <name><surname>Tuleau-Malot</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>Variable selection using random forests</article-title>. <source>Pattern Recogn. Lett.</source> <volume>31</volume>, <fpage>2225</fpage>&#x2013;<lpage>2236</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.patrec.2010.03.014</pub-id></citation>
</ref>
<ref id="ref13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guti&#x00E9;rrez</surname> <given-names>D.</given-names></name> <name><surname>Ram&#x00ED;rez-Moreno</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Assessing a learning process with functional ANOVA estimators of EEG power spectral densities</article-title>. <source>Cogn. Neurodyn.</source> <volume>10</volume>, <fpage>175</fpage>&#x2013;<lpage>183</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11571-015-9368-7</pub-id>, PMID: <pub-id pub-id-type="pmid">27066154</pub-id></citation>
</ref>
<ref id="ref14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hersche</surname> <given-names>M.</given-names></name> <name><surname>Benini</surname> <given-names>L.</given-names></name> <name><surname>Rahimi</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Binarization methods for motor-imagery brain-computer interface classification</article-title>. <source>IEEE J. Emerg. Select. Top. Circ. Syst.</source> <volume>10</volume>, <fpage>567</fpage>&#x2013;<lpage>577</lpage>. doi: <pub-id pub-id-type="doi">10.1109/JETCAS.2020.3031698</pub-id></citation>
</ref>
<ref id="ref15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hetherington</surname> <given-names>C.</given-names></name> <name><surname>Flin</surname> <given-names>R.</given-names></name> <name><surname>Mearns</surname> <given-names>K.</given-names></name></person-group> (<year>2006</year>). <article-title>Safety in shipping: the human element</article-title>. <source>J. Saf. Res.</source> <volume>37</volume>, <fpage>401</fpage>&#x2013;<lpage>411</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jsr.2006.04.007</pub-id></citation>
</ref>
<ref id="ref16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>H. F.</given-names></name> <name><surname>Liao</surname> <given-names>Z. K.</given-names></name> <name><surname>Xiao</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <article-title>Action recognition using multiple pooling strategies of CNN features</article-title>. <source>Neural. Process. Lett.</source> <volume>50</volume>, <fpage>379</fpage>&#x2013;<lpage>396</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11063-018-9932-3</pub-id></citation>
</ref>
<ref id="ref17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ieracitano</surname> <given-names>C.</given-names></name> <name><surname>Mammone</surname> <given-names>N.</given-names></name> <name><surname>Bramanti</surname> <given-names>A.</given-names></name> <name><surname>Hussain</surname> <given-names>A.</given-names></name> <name><surname>Morabito</surname> <given-names>F. C.</given-names></name></person-group> (<year>2019</year>). <article-title>A convolutional neural network approach for classification of dementia stages based on 2D-spectral representation of EEG recordings</article-title>. <source>Neurocomputing</source> <volume>323</volume>, <fpage>96</fpage>&#x2013;<lpage>107</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neucom.2018.09.071</pub-id></citation>
</ref>
<ref id="ref18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iqbal</surname> <given-names>M. U.</given-names></name> <name><surname>Shahab</surname> <given-names>M. A.</given-names></name> <name><surname>Choudhary</surname> <given-names>M.</given-names></name> <name><surname>Srinivasan</surname> <given-names>B.</given-names></name> <name><surname>Srinivasan</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Electroencephalography (EEG) based cognitive measures for evaluating the effectiveness of operator training</article-title>. <source>Process Saf. Environ. Prot.</source> <volume>150</volume>, <fpage>51</fpage>&#x2013;<lpage>67</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.psep.2021.03.050</pub-id></citation>
</ref>
<ref id="ref19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>S.</given-names></name> <name><surname>Chen</surname> <given-names>W.</given-names></name> <name><surname>Kang</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Correlation evaluation of pilots&#x2019; situation awareness in bridge simulations via eye-tracking technology</article-title>. <source>Comput. Intell. Neurosci.</source> <volume>2021</volume>, <fpage>1</fpage>&#x2013;<lpage>15</lpage>. doi: <pub-id pub-id-type="doi">10.1155/2021/7122437</pub-id></citation>
</ref>
<ref id="ref20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jie</surname> <given-names>X.</given-names></name> <name><surname>Rui</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>L.</given-names></name></person-group> (<year>2014</year>). <article-title>Emotion recognition based on the sample entropy of EEG</article-title>. <source>Biomed. Mater. Eng.</source> <volume>24</volume>, <fpage>1185</fpage>&#x2013;<lpage>1192</lpage>. doi: <pub-id pub-id-type="doi">10.3233/BME-130919</pub-id>, PMID: <pub-id pub-id-type="pmid">24212012</pub-id></citation>
</ref>
<ref id="ref21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x00E4;stle</surname> <given-names>J. L.</given-names></name> <name><surname>Anvari</surname> <given-names>B.</given-names></name> <name><surname>Krol</surname> <given-names>J.</given-names></name> <name><surname>Wurdemann</surname> <given-names>H. A.</given-names></name></person-group> (<year>2021</year>). <article-title>Correlation between situational awareness and EEG signals</article-title>. <source>Neurocomputing</source> <volume>432</volume>, <fpage>70</fpage>&#x2013;<lpage>79</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neucom.2020.12.026</pub-id></citation>
</ref>
<ref id="ref22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kaur</surname> <given-names>H.</given-names></name> <name><surname>Kaur</surname> <given-names>N.</given-names></name> <name><surname>Neeru</surname> <given-names>N.</given-names></name></person-group> (<year>2022</year>). <article-title>Evolution of multiorgan segmentation techniques from traditional to deep learning in abdominal CT images-A systematic review</article-title>. <source>Displays</source> <volume>73</volume>, <fpage>102223</fpage>&#x2013;<lpage>102221</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2022.102223</pub-id></citation>
</ref>
<ref id="ref23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klaproth</surname> <given-names>O. W.</given-names></name> <name><surname>Vernaleken</surname> <given-names>C.</given-names></name> <name><surname>Krol</surname> <given-names>L. R.</given-names></name> <name><surname>Halbruegge</surname> <given-names>M.</given-names></name> <name><surname>Zander</surname> <given-names>T. O.</given-names></name> <name><surname>Russwinkel</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>Tracing pilots' situation assessment by neuroadaptive cognitive modeling</article-title>. <source>Front. Neurosci.</source> <volume>14</volume>:<fpage>795</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fnins.2020.00795</pub-id></citation>
</ref>
<ref id="ref24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>M. A.</given-names></name> <name><surname>Han</surname> <given-names>J. F.</given-names></name> <name><surname>Yang</surname> <given-names>J. F.</given-names></name></person-group> (<year>2021</year>). <article-title>Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN</article-title>. <source>Med. Biol. Eng. Comput.</source> <volume>59</volume>, <fpage>2037</fpage>&#x2013;<lpage>2050</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11517-021-02396-w</pub-id>, PMID: <pub-id pub-id-type="pmid">34424453</pub-id></citation>
</ref>
<ref id="ref25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>F. H.</given-names></name> <name><surname>Huang</surname> <given-names>W.</given-names></name> <name><surname>Luo</surname> <given-names>M. Y.</given-names></name> <name><surname>Zhang</surname> <given-names>P.</given-names></name> <name><surname>Zha</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>A new VAE-GAN model to synthesize arterial spin labeling images from structural MRI</article-title>. <source>Displays</source> <volume>70</volume>:<fpage>102079</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2021.102079</pub-id></citation>
</ref>
<ref id="ref26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lo</surname> <given-names>J. C.</given-names></name> <name><surname>Sehic</surname> <given-names>E.</given-names></name> <name><surname>Brookhuis</surname> <given-names>K. A.</given-names></name> <name><surname>Meijer</surname> <given-names>S. A.</given-names></name></person-group> (<year>2016</year>). <article-title>Explicit or implicit situation awareness? Measuring the situation awareness of train traffic controllers</article-title>. <source>Transp. Res. Part F-Traffic Psychol. Behav.</source> <volume>43</volume>, <fpage>325</fpage>&#x2013;<lpage>338</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.trf.2016.09.006</pub-id></citation>
</ref>
<ref id="ref27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>R. H.</given-names></name> <name><surname>Ge</surname> <given-names>Y. S.</given-names></name> <name><surname>Hu</surname> <given-names>Z. L.</given-names></name> <name><surname>Liang</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>Z. C.</given-names></name></person-group> (<year>2021</year>). <article-title>DeepPhase: learning phase contrast signal from dual energy X-ray absorption images</article-title>. <source>Displays</source> <volume>69</volume>:<fpage>102027</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2021.102027</pub-id></citation>
</ref>
<ref id="ref28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McDonald</surname> <given-names>A. D.</given-names></name> <name><surname>Ferris</surname> <given-names>T. K.</given-names></name> <name><surname>Wiener</surname> <given-names>T. A.</given-names></name></person-group> (<year>2020</year>). <article-title>Classification of driver distraction: a comprehensive analysis of feature generation, machine learning, and input measures</article-title>. <source>Hum. Factors</source> <volume>62</volume>, <fpage>1019</fpage>&#x2013;<lpage>1035</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0018720819856454</pub-id>, PMID: <pub-id pub-id-type="pmid">31237788</pub-id></citation>
</ref>
<ref id="ref29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mehta</surname> <given-names>R. K.</given-names></name> <name><surname>Peres</surname> <given-names>S. C.</given-names></name> <name><surname>Shortz</surname> <given-names>A. E.</given-names></name> <name><surname>Hoyle</surname> <given-names>W.</given-names></name> <name><surname>Lee</surname> <given-names>M.</given-names></name> <name><surname>Saini</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Operator situation awareness and physiological states during offshore well control scenarios</article-title>. <source>J. Loss Prev. Process Ind.</source> <volume>55</volume>, <fpage>332</fpage>&#x2013;<lpage>337</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.jlp.2018.07.010</pub-id></citation>
</ref>
<ref id="ref30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohammadfam</surname> <given-names>I.</given-names></name> <name><surname>Mirzaei Aliabadi</surname> <given-names>M.</given-names></name> <name><surname>Soltanian</surname> <given-names>A. R.</given-names></name> <name><surname>Tabibzadeh</surname> <given-names>M.</given-names></name> <name><surname>Mahdinia</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Investigating interactions among vital variables affecting situation awareness based on fuzzy DEMATEL method</article-title>. <source>Int. J. Ind. Ergon.</source> <volume>74</volume>:<fpage>102842</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ergon.2019.102842</pub-id></citation>
</ref>
<ref id="ref31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mosier</surname> <given-names>K. L.</given-names></name> <name><surname>Fischer</surname> <given-names>U.</given-names></name> <name><surname>Morrow</surname> <given-names>D.</given-names></name> <name><surname>Feigh</surname> <given-names>K. M.</given-names></name> <name><surname>Durso</surname> <given-names>F. T.</given-names></name> <name><surname>Sullivan</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Automation, task, and context features: impacts on pilots&#x2019; judgments of human-automation interaction</article-title>. <source>J. Cogn. Eng. Decis. Mak.</source> <volume>7</volume>, <fpage>377</fpage>&#x2013;<lpage>399</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1555343413487178</pub-id></citation>
</ref>
<ref id="ref32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Omar</surname> <given-names>L.</given-names></name> <name><surname>Ivrissimtzis</surname> <given-names>I.</given-names></name></person-group> (<year>2019</year>). <article-title>Using theoretical ROC curves for analysing machine learning binary classifiers</article-title>. <source>Pattern Recogn. Lett.</source> <volume>128</volume>, <fpage>447</fpage>&#x2013;<lpage>451</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.patrec.2019.10.004</pub-id></citation>
</ref>
<ref id="ref33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pan</surname> <given-names>H.</given-names></name> <name><surname>Liu</surname> <given-names>X. B.</given-names></name> <name><surname>Cai</surname> <given-names>X. Y.</given-names></name> <name><surname>Lai</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Classification of schizophrenia EEG based on gamma-band brain network</article-title>. <source>Int. J. Psychophysiol.</source> <volume>168</volume>, <fpage>S130</fpage>&#x2013;<lpage>S131</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2021.07.376</pub-id></citation>
</ref>
<ref id="ref34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paulus</surname> <given-names>Y. T.</given-names></name> <name><surname>Remijn</surname> <given-names>G. B.</given-names></name></person-group> (<year>2021</year>). <article-title>Usability of various dwell times for eye-gaze-based object selection with eye tracking</article-title>. <source>Displays</source> <volume>67</volume>, <fpage>101997</fpage>&#x2013;<lpage>101999</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2021.101997</pub-id></citation>
</ref>
<ref id="ref35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pei</surname> <given-names>H. N.</given-names></name> <name><surname>Huang</surname> <given-names>X. Q.</given-names></name> <name><surname>Ding</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Image visualization: dynamic and static images generate users' visual cognitive experience using eye-tracking technology</article-title>. <source>Displays</source> <volume>73</volume>:<fpage>102175</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2022.102175</pub-id></citation>
</ref>
<ref id="ref36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perez-Valero</surname> <given-names>E.</given-names></name> <name><surname>Lopez-Gordo</surname> <given-names>M. A.</given-names></name> <name><surname>Vaquero-Blasco</surname> <given-names>M. A.</given-names></name></person-group> (<year>2021</year>). <article-title>EEG-based multi-level stress classification with and without smoothing filter</article-title>. <source>Biomed. Signal Proc. Control</source> <volume>69</volume>, <fpage>102881</fpage>&#x2013;<lpage>102888</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.bspc.2021.102881</pub-id></citation>
</ref>
<ref id="ref37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Puma</surname> <given-names>S.</given-names></name> <name><surname>Matton</surname> <given-names>N.</given-names></name> <name><surname>Paubel</surname> <given-names>P. V.</given-names></name> <name><surname>Raufaste</surname> <given-names>&#x00C9;.</given-names></name> <name><surname>el-Yagoubi</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>Using theta and alpha band power to assess cognitive workload in multitasking environments</article-title>. <source>Int. J. Psychophysiol.</source> <volume>123</volume>, <fpage>111</fpage>&#x2013;<lpage>120</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2017.10.004</pub-id>, PMID: <pub-id pub-id-type="pmid">29017780</pub-id></citation>
</ref>
<ref id="ref38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quan</surname> <given-names>S. C.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name> <name><surname>Lin</surname> <given-names>L. Y.</given-names></name> <name><surname>Shi</surname> <given-names>Z.</given-names></name> <name><surname>Ying</surname> <given-names>H.</given-names></name> <name><surname>Yuan</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Automatic CT whole-lung segmentation in radiomics discrimination: methodology and application in pneumonia diagnosis and distinguishment</article-title>. <source>Displays</source> <volume>71</volume>, <fpage>102144</fpage>&#x2013;<lpage>102147</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2021.102144</pub-id></citation>
</ref>
<ref id="ref39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rabcan</surname> <given-names>J.</given-names></name> <name><surname>Levashenko</surname> <given-names>V.</given-names></name> <name><surname>Zaitseva</surname> <given-names>E.</given-names></name> <name><surname>Kvassay</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>EEG signal classification based on fuzzy classifiers</article-title>. <source>IEEE Trans. Ind. Inform.</source> <volume>18</volume>, <fpage>757</fpage>&#x2013;<lpage>766</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TII.2021.3084352</pub-id></citation>
</ref>
<ref id="ref40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ren</surname> <given-names>J. W.</given-names></name> <name><surname>Yao</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Jiang</surname> <given-names>H. Y.</given-names></name> <name><surname>Zhao</surname> <given-names>X. C.</given-names></name></person-group> (<year>2022</year>). <article-title>Recognition efficiency of atypical cardiovascular readings on ECG devices through fogged goggles</article-title>. <source>Displays</source> <volume>72</volume>:<fpage>102148</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2021.102148</pub-id></citation>
</ref>
<ref id="ref41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saini</surname> <given-names>N.</given-names></name> <name><surname>Bhardwaj</surname> <given-names>S.</given-names></name> <name><surname>Agarwal</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Classification of EEG signals using hybrid combination of features for lie detection</article-title>. <source>Neural Comput. Applic.</source> <volume>32</volume>, <fpage>3777</fpage>&#x2013;<lpage>3787</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00521-019-04078-z</pub-id></citation>
</ref>
<ref id="ref4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scornet</surname> <given-names>E.</given-names></name> <name><surname>Biau</surname> <given-names>G.</given-names></name> <name><surname>Vert</surname> <given-names>J. P.</given-names></name></person-group> (<year>2015</year>). <article-title>Consistency of random forests</article-title>. <source>Ann. Stat.</source> <volume>43</volume>, <fpage>1716</fpage>&#x2013;<lpage>1741</lpage>. doi: <pub-id pub-id-type="doi">10.1214/15-AOS1321</pub-id></citation>
</ref>
<ref id="ref42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Srinivasan</surname> <given-names>R.</given-names></name> <name><surname>Srinivasan</surname> <given-names>B.</given-names></name> <name><surname>Iqbal</surname> <given-names>M. U.</given-names></name> <name><surname>Nemet</surname> <given-names>A.</given-names></name> <name><surname>Kravanja</surname> <given-names>Z.</given-names></name></person-group> (<year>2019</year>). <article-title>Recent developments towards enhancing process safety: inherent safety and cognitive engineering</article-title>. <source>Comput. Chem. Eng.</source> <volume>128</volume>, <fpage>364</fpage>&#x2013;<lpage>383</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.compchemeng.2019.05.034</pub-id></citation>
</ref>
<ref id="ref43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stirling</surname> <given-names>L.</given-names></name> <name><surname>Siu</surname> <given-names>C. H.</given-names></name> <name><surname>Jones</surname> <given-names>E. D.</given-names></name> <name><surname>Duda</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Human factors considerations for enabling functional use of exosystems in operational environments</article-title>. <source>IEEE Syst. J.</source> <volume>13</volume>, <fpage>1072</fpage>&#x2013;<lpage>1083</lpage>. doi: <pub-id pub-id-type="doi">10.1109/JSYST.2018.2821689</pub-id></citation>
</ref>
<ref id="ref44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>R. F.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Yu</surname> <given-names>H. T.</given-names></name> <name><surname>Wei</surname> <given-names>X.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Deng</surname> <given-names>B.</given-names></name></person-group> (<year>2015</year>). <article-title>Power spectral density and coherence analysis of Alzheimer&#x2019;s EEG</article-title>. <source>Cogn. Neurodyn.</source> <volume>9</volume>, <fpage>291</fpage>&#x2013;<lpage>304</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s11571-014-9325-x</pub-id>, PMID: <pub-id pub-id-type="pmid">25972978</pub-id></citation>
</ref>
<ref id="ref45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weng</surname> <given-names>J.</given-names></name> <name><surname>Yang</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>Investigation of shipping accident injury severity and mortality</article-title>. <source>Accid. Anal. Prev.</source> <volume>76</volume>, <fpage>92</fpage>&#x2013;<lpage>101</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.aap.2015.01.002</pub-id>, PMID: <pub-id pub-id-type="pmid">25617776</pub-id></citation>
</ref>
<ref id="ref46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wild</surname> <given-names>R. J.</given-names></name></person-group> (<year>2011</year>). <article-title>The paradigm and the paradox of perfect pilotage</article-title>. <source>J. Navig.</source> <volume>64</volume>, <fpage>183</fpage>&#x2013;<lpage>191</lpage>. doi: <pub-id pub-id-type="doi">10.1017/S0373463310000366</pub-id></citation>
</ref>
<ref id="ref47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xiong</surname> <given-names>P.</given-names></name> <name><surname>Zhang</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>J. S.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>M.</given-names></name> <name><surname>du</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Multi-grained cascade forest model for automatic CAD characterization on ECG segments</article-title>. <source>Displays</source> <volume>70</volume>, <fpage>102070</fpage>&#x2013;<lpage>102076</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.displa.2021.102070</pub-id></citation>
</ref>
<ref id="ref48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Zhang</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Han</surname> <given-names>X.</given-names></name> <name><surname>Gao</surname> <given-names>S.</given-names></name> <name><surname>Gao</surname> <given-names>X.</given-names></name></person-group> (<year>2020</year>). <article-title>The spatio-temporal equalization for evoked or event-related potential detection in multichannel EEG data</article-title>. <source>IEEE Trans. Biomed. Eng.</source> <volume>67</volume>, <fpage>2397</fpage>&#x2013;<lpage>2414</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TBME.2019.2961743</pub-id>, PMID: <pub-id pub-id-type="pmid">31870977</pub-id></citation>
</ref>
<ref id="ref49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zahabi</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Shahrampour</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Classification of officers&#x2019; driving situations based on eye-tracking and driver performance measures</article-title>. <source>IEEE Trans. Hum. Mach. Syst.</source> <volume>51</volume>, <fpage>394</fpage>&#x2013;<lpage>402</lpage>. doi: <pub-id pub-id-type="doi">10.1109/THMS.2021.3090787</pub-id></citation>
</ref>
<ref id="ref50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>T.</given-names></name> <name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Liang</surname> <given-names>N. D.</given-names></name> <name><surname>Pitts</surname> <given-names>B. J.</given-names></name> <name><surname>Prakah-Asante</surname> <given-names>K. O.</given-names></name> <name><surname>Curry</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Physiological measurements of situation awareness: a systematic review</article-title>. <source>Hum. Factors</source> <volume>11</volume>:<fpage>18720820969071</fpage>. doi: <pub-id pub-id-type="doi">10.1177/0018720820969071</pub-id></citation>
</ref>
<ref id="ref51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>J. F.</given-names></name> <name><surname>Li</surname> <given-names>H. B.</given-names></name> <name><surname>Liang</surname> <given-names>F.</given-names></name> <name><surname>Han</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name></person-group> (<year>2021</year>). <article-title>Vehicle driver drowsiness detection method using wearable EEG based on convolution neural network</article-title>. <source>Neural Comput. Applic.</source> <volume>33</volume>, <fpage>13965</fpage>&#x2013;<lpage>13980</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00521-021-06038-y</pub-id></citation>
</ref>
</ref-list>
</back>
</article>