<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.985887</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Application of virtual simulation situational model in Russian spatial preposition teaching</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Gao</surname> <given-names>Yanrong</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1897955/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Kassymova</surname> <given-names>R. T.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Luo</surname> <given-names>Yong</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Faculty of Philology and World Languages, Al-Farabi Kazakh National University</institution>, <addr-line>Almaty</addr-line>, <country>Kazakhstan</country></aff>
<aff id="aff2"><sup>2</sup><institution>Euro-Language&#x00027;s College, Zhejiang Yuexiu University</institution>, <addr-line>Shaoxing</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Network and Educational Technology Center, Zhejiang Yuexiu University</institution>, <addr-line>Shaoxing</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Zhihan Lv, Uppsala University, Sweden</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Ziyang Han, Shenyang Jianzhu University, China; Dongyuan Ge, Guangxi University of Science and Technology, China; Yitong Niu, Sofia University, United States</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Yanrong Gao  <email>20132080&#x00040;zyufl.edu.cn</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Human-Media Interaction, a section of the journal Frontiers in Psychology</p></fn></author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>09</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>985887</elocation-id>
<history>
<date date-type="received">
<day>04</day>
<month>07</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>22</day>
<month>07</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Gao, Kassymova and Luo.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Gao, Kassymova and Luo</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>The purpose of this work is to improve the teaching quality of Russian spatial prepositions in colleges. Taking the teaching of Russian spatial prepositions as an example, it studies the key technologies of 3D Virtual Simulation (VS) teaching. 3D VS situational teaching is a high-end visual teaching technology, and VS situation construction focuses on Human-Computer Interaction (HCI) to explore and present a realistic language teaching scene. Here, the Steady-State Visual Evoked Potential (SSVEP) is used to control the Brain-Computer Interface (BCI), and an SSVEP-BCI system is constructed through Hybrid Frequency-Phase Modulation (HFPM). The acquisition system obtains the current SSVEP from the user&#x00027;s brain to determine which module the user is watching and thereby execute the instruction encoded by that module. Experiments show that the recognition accuracy of the proposed HFPM-based SSVEP-BCI system increases with data length. At a data length of 0.6 s, the Information Transfer Rate (ITR) reaches its highest value: 242.21 &#x000B1; 46.88 bits/min. Therefore, a high-speed SSVEP-based BCI character input system is designed using HFPM. The main contribution of this work is an SSVEP-BCI system based on hybrid frequency-phase modulation. It outperforms currently known BCI character input systems and is of great value for optimizing the performance of the VS situation system for teaching Russian spatial prepositions.</p></abstract>
<kwd-group>
<kwd>virtual simulation</kwd>
<kwd>situation simulation</kwd>
<kwd>Russian teaching</kwd>
<kwd>spatial preposition</kwd>
<kwd>SSVEP-BCI</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="2"/>
<equation-count count="8"/>
<ref-count count="43"/>
<page-count count="13"/>
<word-count count="6921"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>The earliest Russian education in China began during the reign of Emperor Kangxi of the Qing Dynasty (1636&#x02013;1912) and has a history of more than 300 years. Russian education and teaching once flourished in China. Due to changes in international relations, the economic recession in Russia, and the sharp decline in China-Russia trade, the number of Russian-language students in Chinese colleges has decreased significantly, and Russian education is declining. There are many reasons for the decline of school Russian education, chief among them students&#x00027; low interest in learning Russian (Khashimova et al., <xref ref-type="bibr" rid="B22">2021</xref>; Panova et al., <xref ref-type="bibr" rid="B29">2021</xref>; Vitalyevna, <xref ref-type="bibr" rid="B36">2021</xref>). First, English teaching has been popularized in Chinese primary schools, so a sudden change from English to Russian in junior high school generates negative feelings among students. Secondly, some students are forced to learn Russian by their parents&#x00027; and teachers&#x00027; decisions; since this is against their own will, it is difficult for them to develop good Russian learning habits. Thirdly, some teachers lack standard Russian pronunciation and adopt the traditional cramming mode, which also fails to mobilize students&#x00027; learning enthusiasm. Therefore, teachers are responsible for cultivating students&#x00027; interest in learning Russian (Li et al., <xref ref-type="bibr" rid="B25">2021</xref>; Markova and Kvapil, <xref ref-type="bibr" rid="B27">2021</xref>; Shaby et al., <xref ref-type="bibr" rid="B32">2021</xref>).</p>
<p>Three-Dimensional (3D) virtual scene technology is an advanced visualization technology. It enables students to learn in an environment similar to the real one (Xiao-Dong and Hong-Hui, <xref ref-type="bibr" rid="B40">2020</xref>; Zhao et al., <xref ref-type="bibr" rid="B42">2020</xref>). All real-time data of the &#x0201C;physical space&#x0201D; are collected through sensors, while display, analysis, simulation, drilling, training, and monitoring functions are realized in the three-dimensional &#x0201C;virtual presentation&#x0201D; environment. In this way, production simulated in the virtual environment can be seamlessly integrated with production in reality. Virtual situation-based scene creation opens up a new Virtual Reality (VR) learning environment for language teaching and breaks through the conventional application of Information Technology (IT) in teaching. The 3D virtual situational practical teaching system overcomes the disadvantages of the traditional teaching model: it brings students into a Russian-world situation where they can feel the stimulation of the foreign language more intuitively. In addition, the 3D virtual situational practical teaching system can, to a certain extent, solve the problem that students&#x00027; language learning cannot be put into practice. The combination of 3D technology and foreign language teaching is a brand-new teaching model with the internal power to change the learning environment and broad application space (Huang et al., <xref ref-type="bibr" rid="B17">2018</xref>; Li et al., <xref ref-type="bibr" rid="B24">2020</xref>; Luo, <xref ref-type="bibr" rid="B26">2022</xref>). In the Virtual Simulation (VS) teaching situation, the combination of Brain-Computer Interface (BCI) and Artificial Intelligence (AI) can dynamically adjust educational tasks according to the personalized characteristics of subjects and their brain activities. This allows the education system to balance productivity with fatigue and boredom (Auccahuasi, <xref ref-type="bibr" rid="B7">2021</xref>; Balderas et al., <xref ref-type="bibr" rid="B8">2021</xref>; Wang, <xref ref-type="bibr" rid="B39">2022</xref>). At the same time, the additional combination with VR is expected to provide users with an appropriate environment and expand the user experience (Papanastasiou et al., <xref ref-type="bibr" rid="B30">2020</xref>).</p>
<p>At present, however, there is no case of using 3D virtual situation technology in Russian teaching in Russian or Chinese colleges, let alone relevant research on an innovative Russian education environment. Regarding situation-based VS in Russian teaching, this work introduces the theory of situation perception and proposes a situation-driven Augmented Reality BCI (AR-BCI) interactive fusion system. BCI converts the electrophysiological signals of the central nervous system into messages and instructions that act on the outside world, thus realizing users&#x00027; wishes in a way similar to conventional neuromuscular channels. The innovation is that, by designing an interactive interface module with machine context perception and an interaction interface driven by human-brain context cognition, the advantages of machine autonomous intelligence and human-brain cognitive decision-making are fully exploited. Further, this work uses the Steady-State Visual Evoked Potential (SSVEP) to control the BCI and constructs an SSVEP-BCI system through the Hybrid Frequency-Phase Modulation (HFPM) method. It is expected to construct a high-speed BCI character input system to promote the practice of BCI and its application in virtual situational teaching.</p>
<sec>
<title>Related work</title>
<p>Context is crucial for language teaching, but traditional classroom teaching cannot provide a language context and thus has certain limitations. Situational simulation teaching theory advocates carrying out teaching activities by constructing real situations. With the mutual penetration of VS technology and experimental teaching, virtual classrooms, virtual simulation training bases, and virtual training have been realized. They promote systematic and standardized development, enrich the equipment of VS experimental teaching, and popularize immersive, experiential, and situational learning. Angelini and Mu&#x000F1;iz (<xref ref-type="bibr" rid="B4">2021</xref>) set up different language training scenes and used virtual technology to restore real situations such as speeches and interviews. Their approach offered learners scenes of communication and interaction; learners became familiar with the environment and adapted to interference, thus overcoming anxiety and psychological fear. The proposed method gave language communication real meaning and trained, cultivated, and improved students&#x00027; oral ability. Kamhi-Stein et al. (<xref ref-type="bibr" rid="B19">2020</xref>) investigated the use of the mixed-reality simulation platform Mursion in language teaching projects. The results showed that the simulation method could reflect the teaching progress of a real classroom and enhance students&#x00027; learning interest.</p>
<p>In constructing VS scenarios, high-throughput neural signals can be obtained directly from brain tissue through an implantable BCI, establishing an information interaction channel between the central nervous system and the external physical world. Many new and interesting applications have already been produced. Implantable BCI is a key enabling technology for the integration of brain and machine; it is expected to realize the deep integration of biological intelligence and machine intelligence by establishing direct interaction between brain and machine (Gao et al., <xref ref-type="bibr" rid="B15">2021</xref>). Cattan et al. (<xref ref-type="bibr" rid="B10">2020</xref>) integrated a P300-based BCI into a VR environment, achieving a high transmission rate and improving the user&#x00027;s game experience in the virtual environment. Shin et al. (<xref ref-type="bibr" rid="B33">2022</xref>) argued that information science should collect and feed back information between the brain and the system, that material science needed a new carrier for information transmission, and that psychological science needed to make bold assumptions and explore the expression of nerves, direct sense, and the subconscious; finally, a closed-loop BCI would be formed by fusing these disciplines.</p>
<p>The above studies imply that integrating VS technology and foreign language experimental teaching is the general trend and is bound to make great achievements in future teaching practice. In the process of constructing the VS situation, it is necessary to further explore the Human-Computer Interaction (HCI) to present a more realistic language teaching scene. This work will focus on the specific aspects of Russian teaching and discuss the specific application of VS scenes in detail.</p>
</sec>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and methods</title>
<sec>
<title>Semantic correspondence of Chinese and Russian spatial prepositions and comparison of phrase syntax</title>
<p>The main purpose of Russian undergraduate teaching is to enable Russian majors to use Russian as an auxiliary means of obtaining professional materials and to provide the necessary knowledge reserves for future specialized applications of scientific, technical, and professional Russian, including language knowledge and language application ability. The Russian course is an important basis for the professional Russian courses that cultivate professional and technical Russian talents (Keihani et al., <xref ref-type="bibr" rid="B21">2018</xref>). Optimizing the teaching quality of college Russian courses will guarantee high-quality professional Russian courses.</p>
<p>In Russian, the grammatical relationships between words and the grammatical functions of words in sentences are mainly expressed through morphological changes. Russian is an Indo-European language that retains highly archaic morphology. Most nouns have 12 forms, with six cases each in the singular and plural. Adjectives have more than 20 or even more than 30 forms; the singular masculine, neuter, and feminine and the plural each have six cases, along with short forms and comparative degrees. A verb can have one or two hundred forms, covering aspect, tense, mood, voice, participles, and adverbial participles. Notional words can generally be divided into stem and ending: the stem carries the lexical meaning of a word, while the ending carries grammatical meaning, and one ending usually combines several grammatical meanings. There are many similarities between Russian and Chinese prepositions in grammatical meaning and syntactic function. Grammatically, Russian prepositions are equivalent to prepositions in Chinese but are expressed by different terms in the two languages (Cattan et al., <xref ref-type="bibr" rid="B10">2020</xref>; Shin et al., <xref ref-type="bibr" rid="B33">2022</xref>). While analyzing the correspondence between Chinese and Russian spatial prepositions, researchers find that each Russian spatial preposition has rich ideographic meanings. For example, &#x0201C;B &#x0002B; the sixth-case noun&#x0201D; can mean &#x0201C;in...,&#x0201D; and &#x0201C;Ha &#x0002B; the sixth-case noun&#x0201D; can mean &#x0201C;on....&#x0201D; By comparison, Chinese has its own characteristics: in Chinese sentences, a preposition structure of &#x0201C;preposition &#x0002B; noun &#x0002B; locative&#x0201D; is required to express a complete meaning, and the single Chinese preposition &#x0201C;&#x05728;&#x0201D; cannot do so on its own. 
Russian prepositions far outnumber Chinese prepositions and have much richer meanings, so one Chinese preposition can be translated into multiple Russian prepositions (Baykalova et al., <xref ref-type="bibr" rid="B9">2018</xref>; Galkina and Alexandra, <xref ref-type="bibr" rid="B14">2019</xref>; Unlu, <xref ref-type="bibr" rid="B35">2019</xref>). This one-to-many relationship is a difficult point for Chinese speakers learning Russian; learners can overcome the selection obstacles by finding similarities and differences through comparison. According to research, prepositions such as &#x0201C;&#x05728;,&#x0201D; &#x0201C;&#x05411;,&#x0201D; and &#x0201C;&#x04ECE;&#x0201D; are used most frequently among Chinese spatial prepositions. In addition, prepositions such as &#x0201C;&#x06CBF;&#x07740;&#x0201D; and &#x0201C;&#x08FCE;&#x07740;&#x0201D; can find grammatical equivalents in Russian, with multiple choices. Russian prepositions such as &#x0201C;B, Ha, and y,&#x0201D; which express the spatial meanings of &#x0201C;in,&#x0201D; &#x0201C;on,&#x0201D; and &#x0201C;beside&#x0201D; in Chinese, can all be uniformly categorized into the Chinese preposition &#x0201C;&#x05728;&#x0201D; structure. The correspondence between the meaning of the preposition &#x0201C;&#x05728;&#x0201D; structure in Chinese and Russian prepositions is shown in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>A comparison of the corresponding relationship between the Chinese preposition &#x0201C;&#x05728;&#x0201D; structure to Russian propositions.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>&#x0201D;&#x05728;&#x0201C; structure in Chinese sentences</bold></th>
<th valign="top" align="left"><bold>Russian preposition &#x0002B; required case</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x091CC;  (in something)</td>
<td valign="top" align="left">B&#x0002B;6, &#x00432;&#x0043D;y&#x00442;&#x00440;&#x00438;&#x0002B;2</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x04E0A;  (on something)</td>
<td valign="top" align="left">ha, &#x0043F;&#x0043E;&#x00434;&#x0002B;5</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x04E0B;  (under something)</td>
<td valign="top" align="left">&#x0043F;&#x00434;&#x0002B;5</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x09644;&#x08FD1;  (near something)</td>
<td valign="top" align="left">&#x0043E;&#x0043A;&#x0043E;&#x0043B;&#x0043E;&#x0002B;2</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x04E2D;&#x095F4;  (in the middle of something)</td>
<td valign="top" align="left">&#x00441;&#x00440;&#x00435;&#x00434;&#x00438;&#x0002B;2</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x0524D;&#x09762;  (in front of something)</td>
<td valign="top" align="left">&#x0043F;&#x00435;&#x00440;&#x00435;&#x00434;&#x0002B;5</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x067D0;&#x04EBA;&#x090A3;&#x091CC; (in somebody&#x00027;s place)</td>
<td valign="top" align="left">y&#x0002B;2</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x0540E;&#x09762;  (in the back of something)</td>
<td valign="top" align="left">a&#x0002B;5</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x04E24;&#x08005;&#x04E4B;&#x095F4;  (between A and B)</td>
<td valign="top" align="left">&#x0043C;&#x00435;&#x00436;&#x00434;&#x00443;&#x0002B;5</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x05185;&#x090E8;  (within something)</td>
<td valign="top" align="left">&#x00432;&#x0043D;y&#x00442;&#x00440;&#x00438;&#x0002B;2</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Similarly, Russian prepositions like &#x0201C;K, BHyTp&#x00438;, &#x003A0;o&#x00434;&#x0201D; that express the spatial meaning of &#x0201C;&#x05411;/&#x05F80;/&#x0671D;&#x02026;&#x02026;&#x053BB;&#x0201D; (for/to/toward something), &#x0201C;&#x05411;/&#x05F80;/&#x0671D;&#x02026;&#x02026;&#x091CC;&#x0201D; (into something), and &#x0201C;&#x05411;/&#x05F80;/&#x0671D;&#x02026;&#x02026;&#x04E0B;&#x0201D; (downward) in Chinese are uniformly classified into the Chinese preposition &#x0201C;&#x05411;&#x0201D; structure. <xref ref-type="table" rid="T2">Table 2</xref> shows the corresponding relationship between the preposition &#x0201C;&#x05411;&#x0201D; structure in Chinese and multiple Russian prepositions.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>A comparison of the corresponding relationship between the Chinese preposition &#x0201C;&#x05411;&#x0201D; structure and Russian prepositions.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Chinese &#x0201C;&#x05411;&#x0201D; preposition structure</bold></th>
<th valign="top" align="left"><bold>Russian preposition &#x0002B; required case</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">&#x05411;&#x02026;&#x02026;&#x091CC;  (into)</td>
<td valign="top" align="left">B&#x0002B;4</td>
</tr>
<tr>
<td valign="top" align="left">&#x05411;&#x02026;&#x02026;&#x053BB;&#x03001;&#x05411;&#x02026;&#x02026;&#x04E0A;&#x09762;  (toward/upward)</td>
<td valign="top" align="left">Ha&#x0002B;4</td>
</tr>
<tr>
<td valign="top" align="left">&#x05411;&#x02026;&#x02026;&#x04E0B; (downward)</td>
<td valign="top" align="left">&#x0043F;&#x0043E;&#x00434;&#x0002B;5</td>
</tr>
<tr>
<td valign="top" align="left">&#x05728;&#x02026;&#x02026;&#x090A3;&#x08FB9; (over)</td>
<td valign="top" align="left">sa&#x0002B;4</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>A Russian noun must change to a specific case before combining with a preposition to express a specific grammatical meaning, whereas in Chinese &#x0201C;preposition &#x0002B; noun &#x0002B; locative&#x0201D; phrases, nouns do not change case. The same Russian spatial preposition followed by different cases can express different grammatical meanings: for example, &#x0201C;B&#x0201D; &#x0002B; the fourth case indicates a direction, while &#x0201C;B&#x0201D; &#x0002B; the sixth case indicates a place. In contrast, the grammatical meaning of Chinese spatial prepositions is relatively stable. There are subtle differences between Chinese and Russian in the use of prepositions with similar meanings. By setting up situations with similar but different examples, foreign language learners can master spatial prepositions faster.</p>
</sec>
<sec>
<title>Russian audio-visual oral teaching under virtual simulation</title>
<p>The 3D virtual situational teaching system can record and live-broadcast the synthesized video in real time for other teachers and students to observe and comment on. The system releases the recorded synthesized courses through the teaching resource management system (Hsu et al., <xref ref-type="bibr" rid="B16">2021</xref>). Students can copy them to their own computers for repeated study to improve their practical training ability. The potential of 3D virtual situational training applications remains to be tapped and effectively opened up (Chahine and Uetova, <xref ref-type="bibr" rid="B11">2021</xref>; Zoda, <xref ref-type="bibr" rid="B43">2022</xref>). Language learning needs continuous practice in specific contexts, and learning in a rich language environment can significantly improve students&#x00027; learning efficiency. 3D virtual situational training and teaching can effectively help students solve problems, improve the learning environment, and enrich teaching forms (Almousa et al., <xref ref-type="bibr" rid="B3">2019</xref>; Ahir et al., <xref ref-type="bibr" rid="B1">2020</xref>; Philippe et al., <xref ref-type="bibr" rid="B31">2020</xref>).</p>
<p>As a symbol of thinking and communication, language is an organic combination of pronunciation, grammar, and semantics produced in a given context. Virtual Reality (VR) technology can provide the necessary environment for language learning; its connection with Russian teaching is mainly reflected in vocabulary and grammar teaching. People can improve vocabulary accumulation by memorizing Russian words through the visual senses. Course teaching is done in the 3D virtual recording and broadcasting room, whose structure is shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. Teachers use simulation technology to create a vivid virtual language learning scene and form a complete 3D virtual activity; the synthesized video is eventually delivered to students&#x00027; mobile phones and tablet terminals. 3D technology can help present the problem of &#x0201C;stating direction, orientation, and path&#x0201D; in Russian within a virtual scene, so that learners can grasp the teaching difficulty of Russian motion verbs with different prefixes through hearing and vision.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Structure of 3D virtual recording and broadcasting room for Russian teaching.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0001.tif"/>
</fig>
</sec>
<sec>
<title>Virtual situation AR-BCI system</title>
<p>A complete BCI process includes four steps: signal acquisition, information decoding and processing, signal output/execution, and feedback (Wang P. et al., <xref ref-type="bibr" rid="B38">2018</xref>). BCI can collect or feed back signals through electricity, magnetism, light, and sound; Electroencephalogram (EEG) technology is the mainstream direction. There are many ways to collect central nervous signals to monitor brain activity, including EEG, functional Near-Infrared Spectroscopy (fNIRS), and functional Magnetic Resonance Imaging (fMRI). Feedback techniques likewise include electricity, magnetism, sound, and light. BCI is a brain-signal detection technology: it decodes a specific brain thinking activity and converts it into a command signal that computers and other devices can understand, which is then output to drive wearable devices on tissues and organs to act according to the brain&#x00027;s intentions. The key link is to correctly analyze the sensory and behavioral signals of the brain (Jensen and Konradsen, <xref ref-type="bibr" rid="B18">2018</xref>; Ke et al., <xref ref-type="bibr" rid="B20">2020</xref>; Arpaia et al., <xref ref-type="bibr" rid="B6">2021</xref>). The BCI technical workflow is shown in <xref ref-type="fig" rid="F2">Figure 2</xref>. A variety of brain signals (such as EEG, magnetoencephalogram, functional magnetic resonance, functional near-infrared spectroscopy, cortical EEG, and local field potentials) collected in the signal acquisition step can serve as BCI input signals. Signal processing is the core work of BCI: it decodes human intention by analyzing and processing these signals. The signal processing of BCI includes preprocessing, feature extraction, and pattern classification; after feature extraction, a classifier is established to classify the features.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>BCI technical workflow.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0002.tif"/>
</fig>
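As a minimal illustration of the three signal-processing stages named above (preprocessing, feature extraction, and pattern classification), the following Python/NumPy sketch band-pass filters a raw signal, extracts band-power features, and classifies them with a nearest-mean rule. The filter bands, sampling rate, and classifier choice are illustrative assumptions of ours, not the implementation described in this article.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Preprocessing (illustrative): crude band-pass filter that zeros
    FFT bins outside [lo, hi] Hz and transforms back to the time domain."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(X, n=x.shape[-1])

def band_power_features(x, fs, bands):
    """Feature extraction: mean spectral power within each frequency band."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

def nearest_mean_classify(feat, class_means):
    """Pattern classification (assumed classifier): assign the feature
    vector to the class whose mean feature vector is closest."""
    dists = [np.linalg.norm(feat - m) for m in class_means]
    return int(np.argmin(dists))
```

For example, with two frequency bands as features, a 10 Hz test signal is assigned to the class whose training mean was built from 10 Hz data.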
<p>EEG acquisition is the critical step of BCI: the acquisition quality, signal strength, stability, and bandwidth directly determine the subsequent processing and output. Changes in the membrane potential of central neurons in the brain produce spikes or action potentials, and the ion movement transmitted between the synapses of nerve cells forms field potentials. These neurophysiological signals can be collected and amplified by external electrodes or by microelectrodes implanted in the motor cortex. Subjects wear electrode caps, with conductive paste used to increase the conductivity between the scalp and the electrodes. The international 10&#x02013;20 standard electrode leads are shown in <xref ref-type="fig" rid="F3">Figure 3</xref>: eight electrodes on the left and right sides, respectively, plus the midpoint of the forehead (Fz), the central point (Cz), the vertex (Pz), and two ear electrodes at the anterior and posterior positions, 21 electrodes in total. Brain activities are transformed into electrical signals through signal processing, which removes interference waves and other artifacts, classifies and processes the targets, and converts them into corresponding output signals. Signal output transmits the collected and processed EEG signals to the connected equipment or feeds them back to the terminal machine as instructions (Kim et al., <xref ref-type="bibr" rid="B23">2021</xref>; Zhang et al., <xref ref-type="bibr" rid="B41">2022</xref>). Once the signal is executed, the equipment generates actions or displays content. Through vision, touch, or hearing, the participants then perceive that the brain waves generated in the first step have been executed, which triggers the feedback signal.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>International 10&#x02013;20 standard electrode lead diagram.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0003.tif"/>
</fig>
<p>SSVEP is an EEG signal characterized by frequency-domain features; therefore, this work uses Canonical Correlation Analysis (CCA) to extract the EEG signal features. When an external visual stimulus of constant frequency is applied, the neural network resonates at the stimulation frequency or its harmonics, leading to significant changes in brain potential activity at those frequencies and producing the SSVEP signal. The SSVEP manifests itself in the EEG as spectral peaks at the stimulation frequency or its harmonics in the power spectrum. By analyzing the frequency corresponding to the detected spectral peak, the stimulus source at which the subject is gazing can be determined, thereby identifying the subject&#x00027;s intention.</p>
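The spectral-peak analysis described above can be sketched as follows. This is an illustrative Python/NumPy example only: the candidate frequencies, sampling rate, and harmonic count are our own assumptions, not the parameters used in this study.

```python
import numpy as np

def detect_ssvep_frequency(eeg, fs, candidate_freqs, n_harmonics=2):
    """Score each candidate stimulation frequency by summing the power
    spectrum at its fundamental and harmonics; return the best candidate."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in candidate_freqs:
        s = 0.0
        for h in range(1, n_harmonics + 1):
            bin_idx = np.argmin(np.abs(freqs - h * f))  # nearest FFT bin
            s += spectrum[bin_idx]
        scores.append(s)
    return candidate_freqs[int(np.argmax(scores))]
```

A synthetic two-second recording dominated by a 12 Hz component (plus its second harmonic and noise) is then identified as a gaze at the 12 Hz target.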
<p>Suppose the number of frequency stimuli is <italic>M</italic>; <italic>X</italic> is the collected multi-channel EEG signal; <italic>Y</italic> is the sine and cosine reference signal corresponding to the frequency of visual stimuli. Its structure can be expressed as:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mrow><mml:msub><mml:mi>Y</mml:mi><mml:mi>m</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:mtable columnalign='left'><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mi>sin</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x003C0;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>m</mml:mi></mml:msub><mml:mi>t</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mi>cos</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x003C0;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>m</mml:mi></mml:msub><mml:mi>t</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mn>...</mml:mn></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mi>sin</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x003C0;</mml:mo><mml:mi>H</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mi>m</mml:mi></mml:msub><mml:mi>t</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr><mml:mtr columnalign='left'><mml:mtd columnalign='left'><mml:mrow><mml:mi>cos</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x003C0;</mml:mo><mml:mi>H</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mi>m</mml:mi></mml:msub><mml:mi>t</mml:mi></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow> <mml:mo>]</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>F</mml:mi></mml:mfrac><mml:mo>,</mml:mo><mml:mfrac><mml:mn>2</mml:mn><mml:mi>F</mml:mi></mml:mfrac><mml:mo>...</mml:mo><mml:mfrac><mml:mi>P</mml:mi><mml:mi>F</mml:mi></mml:mfrac></mml:mrow></mml:math></disp-formula>
<p>In Equation (1), <italic>f</italic><sub><italic>m</italic></sub>, <italic>H</italic>, <italic>F</italic>, and <italic>P</italic> refer to the stimulation frequency, the number of harmonics, the sampling rate, and the number of signal samples, respectively.</p>
<p>Vectors <italic>Wx</italic> and <italic>Wy</italic> are sought to maximize the correlation between the projected vectors <italic>x</italic> and <italic>y</italic>. The canonical correlation coefficient &#x003C1; of <italic>X</italic> and <italic>Y</italic> can be expressed as:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>&#x003C1;</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo class="qopname">max</mml:mo></mml:mrow><mml:mrow><mml:mi>W</mml:mi><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>W</mml:mi><mml:mi>y</mml:mi></mml:mrow></mml:munder></mml:mstyle><mml:mfrac><mml:mrow><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:msup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msqrt><mml:mrow><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:msup><mml:mrow><mml:mi>x</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>]</mml:mo></mml:mrow><mml:mi>E</mml:mi><mml:mrow><mml:mo>[</mml:mo><mml:mrow><mml:mi>y</mml:mi><mml:msup><mml:mrow><mml:mi>y</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi></mml:mrow></mml:msup></mml:mrow><mml:mo>]</mml:mo></mml:mrow></mml:mrow></mml:msqrt></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>Further, the visual stimulation frequency <inline-formula><mml:math id="M3"><mml:mover accent="true"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover></mml:math></inline-formula> of SSVEP can be expressed as:</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mover accent="true"><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mo>^</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munder><mml:mrow><mml:mo class="qopname">arg</mml:mo><mml:mo class="qopname">max</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>f</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:munder></mml:mstyle><mml:msub><mml:mrow><mml:mi>&#x003C1;</mml:mi></mml:mrow><mml:mrow><mml:mi>m</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
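<p>The detection pipeline of Equations (1)&#x02013;(3) can be sketched in a few lines of Python. This is a minimal NumPy illustration, not the original implementation; the function names and default parameters are ours, and the canonical correlation is computed with the standard QR/SVD construction.</p>

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between X (channels x samples)
    and Y (2H x samples), via QR decomposition and SVD."""
    X = X - X.mean(axis=1, keepdims=True)
    Y = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(X.T)            # orthonormal basis of X's row space
    Qy, _ = np.linalg.qr(Y.T)            # orthonormal basis of Y's row space
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_signals(f_m, H, F, P):
    """Sine/cosine reference matrix Y_m of Eq. (1):
    harmonics 1..H of f_m, sampling rate F, P samples."""
    t = np.arange(1, P + 1) / F          # t = 1/F, 2/F, ..., P/F
    rows = []
    for h in range(1, H + 1):
        rows.append(np.sin(2 * np.pi * h * f_m * t))
        rows.append(np.cos(2 * np.pi * h * f_m * t))
    return np.vstack(rows)

def detect_frequency(X, freqs, H=3, F=1000):
    """Eq. (3): the stimulus frequency maximizing the canonical correlation."""
    P = X.shape[1]
    rhos = [cca_corr(X, reference_signals(f, H, F, P)) for f in freqs]
    return freqs[int(np.argmax(rhos))]
```

Given a multi-channel trial containing a 12.4 Hz response, `detect_frequency(X, candidate_freqs)` returns 12.4 because the reference set at that frequency correlates most strongly with the projected EEG.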
<p>The basic principle of HoloLens is near-eye 3D diffraction display technology. It projects the virtual content from the front micro projector onto the photoconductive lens and then into the human eye (Chen et al., <xref ref-type="bibr" rid="B13">2020</xref>; Apicella et al., <xref ref-type="bibr" rid="B5">2022</xref>). HoloLens can carry out <italic>XYZ</italic> three-axis modeling of the surrounding space and recognize the user&#x00027;s gestures through multiple cameras and sensors. The modeling of this 3D coordinate system also makes it possible for multiple HoloLens devices to share virtual objects and interact with each other. That is, user B can see the virtual scenery generated by user A.</p>
<p>Situational awareness refers to the operator&#x00027;s perception and understanding of the current system environment and the prediction of future changes in the system situation. Endsley constructed a three-level theoretical model of situational awareness: perception, understanding, and prediction. The situation-based Human-Computer Interaction (HCI) system is shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. The system includes situational information collection and processing as well as application services. The purpose is to present the data processing results to users as information and to produce user cognition for making decisions or behavioral responses. Situational cognition emphasizes that users generate decisions and judgments about tasks or systems in this context based on situational awareness. Users complete the HCI and form interactive feedback by outputting behaviors and executing actions through the machine.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Situation-based HCI system.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0004.tif"/>
</fig>
</sec>
<sec>
<title>Frequency phase hybrid coding method in BCI</title>
<p>Control application is a major branch of BCI. It concerns reasonably designing and matching BCI control tasks. It ensures the efficiency of BCI and a good HCI experience, the reliability and safety of the control system, and the coordination and unity of BCI and the control equipment. Brain nerve cells produce changes in potential activity in response to different stimuli. These changes are most obvious in or near the primary visual cortex, namely, the occipital lobe. This change in EEG signals is called the Visual Evoked Potential (VEP) (Al Janabi et al., <xref ref-type="bibr" rid="B2">2020</xref>; Moro et al., <xref ref-type="bibr" rid="B28">2021</xref>). According to the characteristics of stimulus frequency, VEP is divided into transient VEP and Steady-State VEP (SSVEP). Due to individual differences, both the motor-imagery-based BCI system and the P300-BCI system need to train the subjects for a long time, which greatly reduces the ease of use of the BCI system. By comparison, the SSVEP-BCI system needs little or no training and has strong adaptability.</p>
<p>When a constant-frequency external visual stimulus is applied, the Neural Network (NN) consistent with the stimulus frequency or harmonic frequency will produce resonance. This results in significant changes in brain potential activity at the stimulus frequency or harmonic frequency, thus generating SSVEP signals.</p>
<p>Physiologically, each part of the brain has its own division of labor. The sensory, motor, and cognitive modules of different cortical regions are independent, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. However, the functional modules cooperate with each other to form an organic whole, and many modules work in parallel when the brain processes perceptual information. A BCI system based on the SSVEP signal judges brain activity by detecting the EEG signal of the occipital visual area. The acquisition system in SSVEP-BCI obtains the current SSVEP from the user&#x00027;s brain and thus determines which module the user is currently watching. A crucial part of the SSVEP-BCI system is the stimulation module that induces the SSVEP: stimuli flickering at different frequencies evoke SSVEPs at correspondingly different frequencies. Nowadays, three kinds of stimulators are used to realize stimulation modules, namely the Liquid Crystal Display (LCD), the Cathode Ray Tube (CRT), and the Light Emitting Diode (LED) (Wang M. et al., <xref ref-type="bibr" rid="B37">2018</xref>; Chai et al., <xref ref-type="bibr" rid="B12">2020</xref>; Thielen et al., <xref ref-type="bibr" rid="B34">2021</xref>).</p>
<p>Multi-frequency permutation coding in the SSVEP-BCI system includes the following steps. All available frequencies of the stimulator are placed in the coding frequency set. Each stimulation module is periodically coded with a unique frequency arrangement, and a coding cycle consists of two or more time segments. This work aims to construct a high-speed BCI character input system using a hybrid frequency-phase coding method. The idea of filter bank analysis is introduced into CCA, a filter bank CCA is proposed, and a 40-target BCI character input system based on frequency coding is designed. A BCI character input system based on SSVEP is then designed using hybrid frequency-phase coding. Frequency coding usually encodes the targets at equal frequency intervals:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M5"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>sin</mml:mi><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x003C0;</mml:mo><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:msub><mml:mi>f</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>&#x00394;</mml:mi><mml:mi>f</mml:mi></mml:mrow> <mml:mo>]</mml:mo></mml:mrow><mml:mi>t</mml:mi></mml:mrow> <mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>In Equation (4), <italic>f</italic><sub>0</sub> is the smallest frequency used, &#x00394;<italic>f</italic> denotes the frequency interval, and <italic>n</italic> is the index of the target.</p>
<p>Introducing equally spaced phases into frequency coding can increase the difference in frequency coding targets:</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M6"><mml:mrow><mml:msub><mml:mi>x</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>t</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mi>sin</mml:mi><mml:mrow><mml:mo>{</mml:mo> <mml:mrow><mml:mn>2</mml:mn><mml:mi>&#x003C0;</mml:mi><mml:mrow><mml:mo>[</mml:mo> <mml:mrow><mml:msub><mml:mi>f</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>&#x00394;</mml:mi><mml:mi>f</mml:mi></mml:mrow> <mml:mo>]</mml:mo></mml:mrow><mml:mi>t</mml:mi><mml:mo>+</mml:mo><mml:msub><mml:mi>&#x003D5;</mml:mi><mml:mn>0</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>n</mml:mi><mml:mo>&#x02212;</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mo>)</mml:mo></mml:mrow><mml:mi>&#x00394;</mml:mi><mml:mi>&#x003D5;</mml:mi></mml:mrow> <mml:mo>}</mml:mo></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>In Equation (5), &#x003D5;<sub>0</sub> and &#x00394;&#x003D5; represent the initial phase of the target at the minimum frequency and the phase interval, respectively.</p>
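<p>As a concrete illustration, Equation (5) can be sampled directly. The default base frequency, frequency interval, and phase interval below (8 Hz, 0.2 Hz, 0.5&#x003C0;) follow the common 40-target layout and are assumed values for the sketch, not parameters reported in this work.</p>

```python
import numpy as np

def jfpm_stimulus(n, f0=8.0, delta_f=0.2, phi0=0.0, delta_phi=0.5 * np.pi,
                  F=60, P=300):
    """Sample Eq. (5) for target n (1-based): a sinusoid whose frequency
    and phase both increase linearly with the target index."""
    t = np.arange(1, P + 1) / F          # t = 1/F, 2/F, ..., P/F
    f_n = f0 + (n - 1) * delta_f         # equal-interval frequency coding
    phi_n = phi0 + (n - 1) * delta_phi   # equal-interval phase coding
    return np.sin(2 * np.pi * f_n * t + phi_n)
```

With these defaults, target 1 flickers at 8.0 Hz with phase 0 and target 2 at 8.2 Hz with phase 0.5&#x003C0;, so adjacent targets differ in both frequency and phase.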
<p>The zero-phase segment data are further circularly shifted, so that SSVEP templates with different phase interval values can be obtained:</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M7"><mml:mrow><mml:mtable><mml:mtr><mml:mtd><mml:mrow><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x02322;</mml:mo></mml:mover><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mover accent='true'><mml:mi>&#x003D5;</mml:mi><mml:mo stretchy='true'>&#x000AF;</mml:mo></mml:mover></mml:mrow><mml:mi>k</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mover accent='true'><mml:mi>X</mml:mi><mml:mo>&#x000AF;</mml:mo></mml:mover><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mi>n</mml:mi><mml:mo>+</mml:mo><mml:mfrac><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x003C0;</mml:mo><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003D5;</mml:mi><mml:mi>k</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000D7;</mml:mo><mml:mi>F</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:mo>&#x003C0;</mml:mo><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mi>k</mml:mi></mml:msub></mml:mrow></mml:mfrac><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:mrow></mml:math></disp-formula>
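<p>The circular-shift idea behind Equation (6), realizing a target phase by rotating a zero-phase segment by the corresponding number of samples, can be sketched as follows. This is a minimal NumPy sketch under our reading of the shift term (with <italic>F</italic> the sampling rate, as in Equation 1); the function name and sign convention are assumptions for illustration.</p>

```python
import numpy as np

def phase_shifted_template(x0, phi_k, f_k, F):
    """Circularly shift a zero-phase segment x0 (1-D, sampled at F Hz) so
    that a component at f_k Hz acquires phase phi_k. Rotating by
    (2*pi - phi_k) * F / (2*pi * f_k) samples advances the phase by phi_k,
    provided x0 spans an integer number of periods of f_k."""
    shift = int(round((2 * np.pi - phi_k) * F / (2 * np.pi * f_k)))
    return np.roll(x0, shift)
```

For a 10 Hz segment sampled at 1 kHz, a target phase of &#x003C0;/2 corresponds to a rotation of 75 samples, and the shifted segment matches the directly synthesized sinusoid of that phase.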
<p>In the online simulation, the BCI system performance is evaluated using the Leave-One-Out Cross-Validation (LOO-CV). The Cross-Validation (CV) method in target recognition generates training reference signals from training data. Classification accuracy and Information Transfer Rate (ITR) indicate the BCI system performance. The amount of information output for each judgment is:</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M8"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>B</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mrow><mml:mo class="qopname">log</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>N</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mi>p</mml:mi><mml:msub><mml:mrow><mml:mo class="qopname">log</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mi>p</mml:mi><mml:mo>&#x0002B;</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>&#x000D7;</mml:mo><mml:msub><mml:mrow><mml:mo class="qopname">log</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>p</mml:mi></mml:mrow><mml:mrow><mml:mi>N</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:mfrac></mml:mrow><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In Equation (7), <italic>N</italic> refers to the number of targets and <italic>p</italic> is the recognition accuracy. The ITR can be expressed as:</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M9"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>I</mml:mi><mml:mi>T</mml:mi><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mi>B</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mn>60</mml:mn><mml:mo>/</mml:mo><mml:mi>T</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>In Equation (8), <italic>T</italic> indicates the time (in seconds) required for each instruction output.</p>
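<p>Equations (7) and (8) combine into a short helper. This is an illustrative sketch (the function name and the clipping of degenerate accuracies are ours); it computes the information per selection <italic>B</italic> and scales it to bits per minute.</p>

```python
import math

def itr_bits_per_min(N, p, T):
    """Information per selection B (Eq. 7) times selections per minute
    (Eq. 8), for N targets, accuracy p, and T seconds per selection."""
    if p >= 1.0:
        B = math.log2(N)                 # perfect accuracy
    elif p <= 0.0:
        B = 0.0                          # degenerate case, clipped for safety
    else:
        B = (math.log2(N) + p * math.log2(p)
             + (1 - p) * math.log2((1 - p) / (N - 1)))
    return B * (60.0 / T)
```

At chance level <italic>p</italic> = 1/<italic>N</italic> the formula yields zero bits, and at <italic>p</italic> = 1 it reduces to log<sub>2</sub> <italic>N</italic> bits per selection.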
<p>The experimental data are collected in the AR-SSVEP paradigm, which includes four groups. A total of 120 trials are collected from four stimulus targets in each group. The sampling rate of the EEG data is 1 kHz, and the band-pass filter is 0.5&#x02013;100 Hz. Four stimulus layouts AR-Pos1&#x0007E;AR-Pos4 are set in AR-SSVEP. The horizontal interval of the same stimulus target in adjacent layouts is 128.</p>
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>Performance results of VS situation AR-SSVEP</title>
<p>The optimal stimulus location layout is obtained from the recognition accuracy of HoloLens at different locations. According to the experimental results, the recognition accuracy and ITR of the five subjects&#x00027; offline and simulated online SSVEP are analyzed. Under the four layouts of AR-SSVEP, the stimulation duration is 0.5&#x02013;4 s with a step of 0.5 s. The classification accuracy is calculated for the four positions at different times; the results are shown in <xref ref-type="fig" rid="F5">Figure 5</xref>. As the data length increases, the classification accuracy of each subject gradually increases, and the classification accuracy of the four layouts differs. Suppose the standard threshold of classification accuracy is 90%. When the time window length is 1, 2, and 3 s, the number of subjects who reach the threshold in the AR-Pos2 layout is higher than in the other three layouts.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Classification accuracy of the four stimulus positions at different times [<bold>(A)</bold> position 1; <bold>(B)</bold> position 2; <bold>(C)</bold> position 3; <bold>(D)</bold> position 4].</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0005.tif"/>
</fig>
</sec>
<sec>
<title>Test results of mixed frequency-phase recognition method</title>
<p><xref ref-type="fig" rid="F6">Figure 6</xref> shows the correlation coefficients of a single-trial SSVEP of a subject. The numerical results are highly consistent with the theoretical pattern of the stimulus signal. The four phase interval values lead to different phase patterns. Under the different phase interval values, the correlation coefficients between 12.4 Hz and the adjacent frequencies differ significantly. With a suitable phase interval, the maximum correlation coefficient is obtained at the target frequency (12.4 Hz, 0.7). This shows that the identification accuracy of SSVEP can be significantly improved by introducing Phase Modulation (PM) into the Hybrid Frequency-Phase Modulation (HFPM).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Correlation coefficients between SSVEP and adjacent frequency for a single test at 12.4 Hz [<bold>(A)</bold> &#x00394;&#x003D5; = 0; <bold>(B)</bold> &#x00394;&#x003D5; = 0.5&#x003C0;; <bold>(C)</bold> &#x00394;&#x003D5; = &#x003C0;; <bold>(D)</bold> &#x00394;&#x003D5; = 1.5&#x003C0;].</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0006.tif"/>
</fig>
<p>The presentation of stimulus signals plays a vital role in SSVEP-BCI. Next, 40 stimulus signals are generated by the sampling sine coding method. <xref ref-type="fig" rid="F7">Figure 7</xref> shows the time domain waveforms of stimulus signals corresponding to different phases at the same frequency (9 Hz) and the corresponding average SSVEP of a single subject. Obviously, the amplitude peaks of SSVEP induced by different phases at the same frequency are all at 9 Hz. Thus, the sampling sine coding method can induce robust SSVEP signals and accurately encode frequency information.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Mixed frequency-phase coding [<bold>(A)</bold> induced SSVEP under different stimulation phases; <bold>(B)</bold> time-domain waveform under different stimulation phases; <bold>(C)</bold> amplitude spectrum of induced SSVEP; <bold>(D)</bold> complex spectral scatter of induced SSVEP].</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0007.tif"/>
</fig>
<p>Subsequently, the performance of the online BCI system with different data lengths is further studied. The system&#x00027;s classification accuracy and ITR under HFPM (40 classes) are calculated. <xref ref-type="fig" rid="F8">Figure 8</xref> shows the performance of the simulated online BCI system under different data lengths in the mixed coding paradigm. The results in <xref ref-type="fig" rid="F8">Figure 8</xref> suggest that the classification accuracy of the system increases with data length, as do the accuracy and ITR of the joint coding paradigm. Moreover, the classification accuracy is still significantly higher than the chance level when the data length is short. The ITR is highest at a data length of 0.6 s, reaching 242.21 &#x000B1; 46.88 bits/min. In actual use, the target recognition time of the online BCI system should be optimized by comprehensively considering both accuracy and ITR.</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Performance of online BCI system under hybrid frequency-phase coding with different data lengths [<bold>(A)</bold> classification accuracy; <bold>(B)</bold> ITR].</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-985887-g0008.tif"/>
</fig>
</sec>
</sec>
<sec sec-type="conclusions" id="s4">
<title>Conclusion</title>
<p>Under the Unipus 3.0 teaching landscape, the interaction model among teaching resources, teaching scenes, teaching practice, and the core teaching elements has transformed foreign language digital education from local optimization to a comprehensive upgrade in the digital intelligence age. The promotion of VS experiment teaching 2.0 should strengthen basic construction in three aspects: the accumulation of basic teaching materials, the exploration of experimental teaching rules, and the construction and development of simulation systems. At the same time, it should draw on intelligent VS to realize the innovative development of liberal arts experiment teaching and open up new paths in future teaching practice.</p>
<p>3D VS situational teaching is a high-end visual teaching technology in which the creation of virtual situations is the primary link. Foreign language teaching needs a real context, and 3D VS situation technology can realize this idea. This work therefore takes Russian spatial preposition teaching as an example to study the key technologies of situation-based VS in constructing VS scenes. Through a BCI, the human brain can control a machine, and the machine can feed back to the ideas of the human brain. Further, a systematic study of the AR-BCI system in virtual situations is carried out. The CCA method is used to extract the EEG features. Next, the SSVEP-BCI frequency recognition method is proposed, and the SSVEP-BCI system is designed and implemented. Finally, a novel hybrid frequency-phase coding method is proposed to improve the average ITR. The results show that the recognition accuracy of SSVEP can be significantly improved by introducing PM into the HFPM. This work still has some limitations. For example, it does not consider that different design characteristics in virtual situational teaching bring different interface design effects, which affect user cognition and operational performance. In the future, the design of the interactive interface should combine visual perception and cognitive psychology, for example, by analyzing the display of graphics and text in the interactive interface, thus evaluating the impact of interface design on user performance.</p>
</sec>
<sec id="s5">
<title>Contributions</title>
<p>In order to improve the teaching quality of Russian spatial prepositions, this work constructs an SSVEP-BCI system based on HFPM and explores the 3D VS situational teaching system. The proposed system outperforms currently known BCI character input systems. It has important value for optimizing the performance of the VS situation system and thereby improving the overall experience of Russian teaching.</p>
</sec>
<sec sec-type="data-availability" id="s6">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.</p>
</sec>
<sec id="s7">
<title>Ethics statement</title>
<p>Ethical approval for this study and written informed consent from the participants of the study were not required in accordance with local legislation and national guidelines.</p>
</sec>
<sec id="s8">
<title>Author contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>This work was supported by Provincial first-class Curriculum Construction project Basic Russian 2 for undergraduate colleges and universities of Zhejiang Provincial Department of Education (Zhejiang Education Office Letter [2020] No. 77).</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahir</surname> <given-names>K.</given-names></name> <name><surname>Govani</surname> <given-names>K.</given-names></name> <name><surname>Gajera</surname> <given-names>R.</given-names></name> <name><surname>Shah</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Application on virtual reality for enhanced education learning, military training and sports</article-title>. <source>Augment. Hum. Res.</source> <volume>5</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1007/s41133-019-0025-2</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al Janabi</surname> <given-names>H. F.</given-names></name> <name><surname>Aydin</surname> <given-names>A.</given-names></name> <name><surname>Palaneer</surname> <given-names>S.</given-names></name> <name><surname>Macchione</surname> <given-names>N.</given-names></name> <name><surname>Al-Jabir</surname> <given-names>A.</given-names></name> <name><surname>Khan</surname> <given-names>M. S.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Effectiveness of the HoloLens mixed-reality headset in minimally invasive surgery: a simulation-based feasibility study</article-title>. <source>Surg. Endosc.</source> <volume>34</volume>, <fpage>1143</fpage>&#x02013;<lpage>1149</lpage>. <pub-id pub-id-type="doi">10.1007/s00464-019-06862-3</pub-id><pub-id pub-id-type="pmid">31214807</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Almousa</surname> <given-names>O.</given-names></name> <name><surname>Prates</surname> <given-names>J.</given-names></name> <name><surname>Yeslam</surname> <given-names>N.</given-names></name> <name><surname>Mac Gregor</surname> <given-names>D.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Phan</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Virtual reality simulation technology for cardiopulmonary resuscitation training: an innovative hybrid system with haptic feedback</article-title>. <source>Simul. Gaming</source> <volume>50</volume>, <fpage>6</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1177/1046878118820905</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Angelini</surname> <given-names>M. L.</given-names></name> <name><surname>Mu&#x000F1;iz</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Simulation through virtual exchange in teacher training</article-title>. <source>Edutec. Revista Electr&#x000F3;nica De Tecnolog&#x000ED;a Educativa</source> <volume>75</volume>, <fpage>65</fpage>&#x02013;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.21556/edutec.2021.75.1913</pub-id><pub-id pub-id-type="pmid">32988797</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Apicella</surname> <given-names>A.</given-names></name> <name><surname>Arpaia</surname> <given-names>P.</given-names></name> <name><surname>de Benedetto</surname> <given-names>E.</given-names></name> <name><surname>Donato</surname> <given-names>N.</given-names></name> <name><surname>Duraccio</surname> <given-names>L.</given-names></name> <name><surname>Giugliano</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Enhancement of SSVEPs classification in BCI-based wearable instrumentation through machine Learning Techniques</article-title>. <source>IEEE Sens. J.</source> <volume>22</volume>, <fpage>9087</fpage>&#x02013;<lpage>9094</lpage>. <pub-id pub-id-type="doi">10.1109/JSEN.2022.3161743</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arpaia</surname> <given-names>P.</given-names></name> <name><surname>de Benedetto</surname> <given-names>E.</given-names></name> <name><surname>Duraccio</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>Design, implementation, metrological characterization of a wearable, integrated AR-BCI hands-free system for health 4.0 monitoring</article-title>. <source>Measurement</source> <volume>177</volume>, <fpage>109280</fpage>. <pub-id pub-id-type="doi">10.1016/j.measurement.2021.109280</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Auccahuasi</surname> <given-names>W.</given-names></name></person-group> (<year>2021</year>). <article-title>Methodology for the evaluation of the levels of attention and meditation in the development of virtual online classes of mathematics, through the use of brain-computer interface</article-title>. <source>Turkish J. Comput. Math. Educ. (TURCOMAT)</source> <volume>12</volume>, <fpage>2703</fpage>&#x02013;<lpage>2708</lpage>. <pub-id pub-id-type="doi">10.17762/turcomat.v12i2.2295</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Balderas</surname> <given-names>D.</given-names></name> <name><surname>Ponce</surname> <given-names>P.</given-names></name> <name><surname>Lopez-Bernal</surname> <given-names>D.</given-names></name> <name><surname>Molina</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Education 4.0: teaching the basis of motor imagery classification algorithms for brain-computer interfaces</article-title>. <source>Future Internet</source> <volume>13</volume>, <fpage>202</fpage>. <pub-id pub-id-type="doi">10.3390/fi13080202</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baykalova</surname> <given-names>E. D.</given-names></name> <name><surname>Artyna</surname> <given-names>M. K.</given-names></name> <name><surname>Dorzhu</surname> <given-names>N. S.</given-names></name> <name><surname>Ochur</surname> <given-names>T. K.</given-names></name> <name><surname>Mongush</surname> <given-names>D. S.</given-names></name></person-group> (<year>2018</year>). <article-title>Morphological interference in the process of mastering English speech in conditions of interaction of Tuvan, Russian and English as a foreign language</article-title>. <source>Opci&#x000F3;n</source> <volume>34</volume>, <fpage>35</fpage>&#x02013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.30853/filnauki.2018-10-2.35</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cattan</surname> <given-names>G.</given-names></name> <name><surname>Andreev</surname> <given-names>A.</given-names></name> <name><surname>Visinoni</surname> <given-names>E.</given-names></name></person-group> (<year>2020</year>). <article-title>Recommendations for integrating a P300-based brain&#x02013;computer interface in virtual reality environments for gaming: an update</article-title>. <source>Computers</source> <volume>9</volume>, <fpage>92</fpage>. <pub-id pub-id-type="doi">10.3390/computers9040092</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chahine</surname> <given-names>I. K.</given-names></name> <name><surname>Uetova</surname> <given-names>E.</given-names></name></person-group> (<year>2021</year>). <article-title>From error annotation to quantitative analysis: patterns in Russian language learning</article-title>. <source>Russ. Lang. J.</source> <volume>71</volume>, <fpage>9</fpage>. <pub-id pub-id-type="doi">10.4324/9781315105048-8</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chai</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Guan</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>T.</given-names></name> <name><surname>Xu</surname> <given-names>J.</given-names></name> <name><surname>Niu</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Effects of fatigue on steady state motion visual evoked potentials: optimised stimulus parameters for a zoom motion-based brain-computer interface</article-title>. <source>Comput. Methods Program. Biomed.</source> <volume>196</volume>, <fpage>105650</fpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2020.105650</pub-id><pub-id pub-id-type="pmid">32682092</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Huang</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Gao</surname> <given-names>X.</given-names></name></person-group> (<year>2020</year>). <article-title>Combination of augmented reality based brain-computer interface and computer vision for high-level control of a robotic arm</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng.</source> <volume>28</volume>, <fpage>3140</fpage>&#x02013;<lpage>3147</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2020.3038209</pub-id><pub-id pub-id-type="pmid">33196442</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Galkina</surname> <given-names>A.</given-names></name> <name><surname>Alexandra</surname> <given-names>V. R.</given-names></name></person-group> (<year>2019</year>). <article-title>Grammatical interference in written papers translated by Russian and American students</article-title>. <source>Train. Lang. Cult.</source> <volume>3</volume>, <fpage>89</fpage>&#x02013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.29366/2019tlc.3.3.6</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Gao</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Interface, interaction, and intelligence in generalized brain&#x02013;computer interfaces</article-title>. <source>Trends Cogn. Sci.</source> <volume>25</volume>, <fpage>671</fpage>&#x02013;<lpage>684</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2021.04.003</pub-id><pub-id pub-id-type="pmid">34116918</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hsu</surname> <given-names>H. T.</given-names></name> <name><surname>Shyu</surname> <given-names>K. K.</given-names></name> <name><surname>Hsu</surname> <given-names>C. C.</given-names></name> <name><surname>Lee</surname> <given-names>L. H.</given-names></name> <name><surname>Lee</surname> <given-names>P. L.</given-names></name></person-group> (<year>2021</year>). <article-title>Phase-approaching stimulation sequence for SSVEP-based BCI: a practical use in VR/AR HMD</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng.</source> <volume>29</volume>, <fpage>2754</fpage>&#x02013;<lpage>2764</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2021.3131779</pub-id><pub-id pub-id-type="pmid">34847036</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>C.</given-names></name> <name><surname>Wen</surname> <given-names>Z.</given-names></name> <name><surname>Lan</surname> <given-names>Y.</given-names></name> <name><surname>Fei</surname> <given-names>C.</given-names></name> <name><surname>Hao</surname> <given-names>Y.</given-names></name> <name><surname>Cheng</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>A novel WebVR-based lightweight framework for virtual visualization of blood vasculum</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>27726</fpage>&#x02013;<lpage>27735</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2840494</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jensen</surname> <given-names>L.</given-names></name> <name><surname>Konradsen</surname> <given-names>F.</given-names></name></person-group> (<year>2018</year>). <article-title>A review of the use of virtual reality head-mounted displays in education and training</article-title>. <source>Educ. Inf. Technol.</source> <volume>23</volume>, <fpage>1515</fpage>&#x02013;<lpage>1529</lpage>. <pub-id pub-id-type="doi">10.1007/s10639-017-9676-0</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kamhi-Stein</surname> <given-names>L. D.</given-names></name> <name><surname>Lao</surname> <given-names>R. S.</given-names></name> <name><surname>Issagholian</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>The future is now: implementing mixed-reality learning environments as a tool for language teacher preparation</article-title>. <source>TESL-EJ</source> <volume>24</volume>, <fpage>n3</fpage>. <pub-id pub-id-type="doi">10.1007/s10956-020-09837-5</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ke</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>P.</given-names></name> <name><surname>An</surname> <given-names>X.</given-names></name> <name><surname>Song</surname> <given-names>X.</given-names></name> <name><surname>Ming</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>An online SSVEP-BCI system in an optical see-through augmented reality environment</article-title>. <source>J. Neural Eng.</source> <volume>17</volume>, <fpage>016066</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/ab4dc6</pub-id><pub-id pub-id-type="pmid">31614342</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keihani</surname> <given-names>A.</given-names></name> <name><surname>Shirzhiyan</surname> <given-names>Z.</given-names></name> <name><surname>Farahi</surname> <given-names>M.</given-names></name> <name><surname>Shamsi</surname> <given-names>E.</given-names></name> <name><surname>Mahnam</surname> <given-names>A.</given-names></name> <name><surname>Makkiabadi</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Use of sine shaped high-frequency rhythmic visual stimuli patterns for SSVEP response analysis and fatigue rate evaluation in normal subjects</article-title>. <source>Front. Hum. Neurosci.</source> <volume>12</volume>, <fpage>201</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2018.00201</pub-id><pub-id pub-id-type="pmid">29892219</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khashimova</surname> <given-names>D.</given-names></name> <name><surname>Niyazova</surname> <given-names>N.</given-names></name> <name><surname>Nasirova</surname> <given-names>U.</given-names></name> <name><surname>Israilova</surname> <given-names>D.</given-names></name> <name><surname>Khikmatov</surname> <given-names>N.</given-names></name> <name><surname>Fayziev</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>The role of electronic literature in the formation of speech skills and abilities of learners and students in teaching Russian language with the Uzbek language of learning (on the example of electronic multimedia textbook in Russian language)</article-title>. <source>J. Lang. Linguist. Stud.</source> <volume>17</volume>, <fpage>445</fpage>&#x02013;<lpage>461</lpage>. <pub-id pub-id-type="doi">10.52462/jlls.28</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Lee</surname> <given-names>S.</given-names></name> <name><surname>Kang</surname> <given-names>H.</given-names></name> <name><surname>Kim</surname> <given-names>S.</given-names></name> <name><surname>Ahn</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>P300 brain&#x02013;computer interface-based drone control in virtual and augmented reality</article-title>. <source>Sensors</source> <volume>21</volume>, <fpage>5765</fpage>. <pub-id pub-id-type="doi">10.3390/s21175765</pub-id><pub-id pub-id-type="pmid">34502655</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>M.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Guo</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Research and application of situated teaching design for NC machining course based on virtual simulation technology</article-title>. <source>Comput. Appl. Eng. Educ.</source> <volume>28</volume>, <fpage>658</fpage>&#x02013;<lpage>674</lpage>. <pub-id pub-id-type="doi">10.1002/cae.22234</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Han</surname> <given-names>M.</given-names></name> <name><surname>Cohen</surname> <given-names>G. L.</given-names></name> <name><surname>Markus</surname> <given-names>H. R.</given-names></name></person-group> (<year>2021</year>). <article-title>Passion matters but not equally everywhere: predicting achievement from interest, enjoyment, and efficacy in 59 societies</article-title>. <source>Proc. Natl. Acad. Sci.</source> <volume>118</volume>, <fpage>e2016964118</fpage>. <pub-id pub-id-type="doi">10.1073/pnas.2016964118</pub-id><pub-id pub-id-type="pmid">33712544</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <article-title>Construction of situational teaching mode in ideological and political classroom based on digital twin technology</article-title>. <source>Comput. Electric. Eng.</source> <volume>101</volume>, <fpage>108104</fpage>. <pub-id pub-id-type="doi">10.1016/j.compeleceng.2022.108104</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Markova</surname> <given-names>E. M.</given-names></name> <name><surname>Kvapil</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Teaching Russian in a closely-related Slovak environment</article-title>. <source>Russ. Lang. Stud.</source> <volume>19</volume>, <fpage>191</fpage>&#x02013;<lpage>206</lpage>. <pub-id pub-id-type="doi">10.22363/2618-8163-2021-19-2-191-206</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moro</surname> <given-names>C.</given-names></name> <name><surname>Phelps</surname> <given-names>C.</given-names></name> <name><surname>Redmond</surname> <given-names>P.</given-names></name> <name><surname>Stromberga</surname> <given-names>Z.</given-names></name></person-group> (<year>2021</year>). <article-title>HoloLens and mobile augmented reality in medical and health science education: a randomised controlled trial</article-title>. <source>Br. J. Educ. Technol.</source> <volume>52</volume>, <fpage>680</fpage>&#x02013;<lpage>694</lpage>. <pub-id pub-id-type="doi">10.1111/bjet.13049</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Panova</surname> <given-names>E.</given-names></name> <name><surname>Tjumentseva</surname> <given-names>E.</given-names></name> <name><surname>Koroleva</surname> <given-names>I.</given-names></name> <name><surname>Ibragimova</surname> <given-names>E.</given-names></name> <name><surname>Samusenkov</surname> <given-names>V.</given-names></name></person-group> (<year>2021</year>). <article-title>Organization of project work with the help of digital technologies in teaching Russian as a foreign language at the initial stage</article-title>. <source>Int. J. Emerg. Technol. Learn. (iJET)</source> <volume>16</volume>, <fpage>208</fpage>&#x02013;<lpage>220</lpage>. <pub-id pub-id-type="doi">10.3991/ijet.v16i22.20573</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papanastasiou</surname> <given-names>G.</given-names></name> <name><surname>Drigas</surname> <given-names>A.</given-names></name> <name><surname>Skianis</surname> <given-names>C.</given-names></name> <name><surname>Lytras</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Brain computer interface based applications for training and rehabilitation of students with neurodevelopmental disorders. A literature review</article-title>. <source>Heliyon</source> <volume>6</volume>, <fpage>e04250</fpage>. <pub-id pub-id-type="doi">10.1016/j.heliyon.2020.e04250</pub-id><pub-id pub-id-type="pmid">32954024</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Philippe</surname> <given-names>S.</given-names></name> <name><surname>Souchet</surname> <given-names>A. D.</given-names></name> <name><surname>Lameras</surname> <given-names>P.</given-names></name> <name><surname>Petridis</surname> <given-names>P.</given-names></name> <name><surname>Caporal</surname> <given-names>J.</given-names></name> <name><surname>Coldeboeuf</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Multimodal teaching, learning and training in virtual reality: a review and case study</article-title>. <source>Virtual Real. Intel. Hardw.</source> <volume>2</volume>, <fpage>421</fpage>&#x02013;<lpage>442</lpage>. <pub-id pub-id-type="doi">10.1016/j.vrih.2020.07.008</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shaby</surname> <given-names>N.</given-names></name> <name><surname>Staus</surname> <given-names>N.</given-names></name> <name><surname>Dierking</surname> <given-names>L. D.</given-names></name> <name><surname>Falk</surname> <given-names>J. H.</given-names></name></person-group> (<year>2021</year>). <article-title>Pathways of interest and participation: how STEM-interested youth navigate a learning ecosystem</article-title>. <source>Sci. Educ.</source> <volume>105</volume>, <fpage>628</fpage>&#x02013;<lpage>652</lpage>. <pub-id pub-id-type="doi">10.1002/sce.21621</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shin</surname> <given-names>J. H.</given-names></name> <name><surname>Kwon</surname> <given-names>J.</given-names></name> <name><surname>Kim</surname> <given-names>J. U.</given-names></name> <name><surname>Ryu</surname> <given-names>H.</given-names></name> <name><surname>Ok</surname> <given-names>J.</given-names></name> <name><surname>Joon Kwon</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Wearable EEG electronics for a brain&#x02013;AI closed-loop system to enhance autonomous machine decision-making</article-title>. <source>npj Flex. Electron.</source> <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1038/s41528-022-00164-w</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thielen</surname> <given-names>J.</given-names></name> <name><surname>Marsman</surname> <given-names>P.</given-names></name> <name><surname>Farquhar</surname> <given-names>J.</given-names></name> <name><surname>Desain</surname> <given-names>P.</given-names></name></person-group> (<year>2021</year>). <article-title>From full calibration to zero training for a code-modulated visual evoked potentials for brain&#x02013;computer interface</article-title>. <source>J. Neural Eng.</source> <volume>18</volume>, <fpage>056007</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/abecef</pub-id><pub-id pub-id-type="pmid">33690182</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Unlu</surname> <given-names>E. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Pinpointing the role of the native language in L2 learning: acquisition of spatial prepositions in English by Russian and Turkish native speakers</article-title>. <source>Appl. Linguist. Rev.</source> <volume>10</volume>, <fpage>241</fpage>&#x02013;<lpage>258</lpage>. <pub-id pub-id-type="doi">10.1515/applirev-2016-1009</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vitalyevna</surname> <given-names>C. Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Interactive methods of teaching Russian literature in schools with Uzbek language learning</article-title>. <source>Orient. Renaiss. Innov. Educ. Natural Soc. Sci.</source> <volume>1</volume>, <fpage>1169</fpage>&#x02013;<lpage>1174</lpage>. <pub-id pub-id-type="doi">10.31862/2218-8711-2021-3-227-234</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>M.</given-names></name> <name><surname>Li</surname> <given-names>R.</given-names></name> <name><surname>Zhang</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>G.</given-names></name> <name><surname>Zhang</surname> <given-names>D.</given-names></name></person-group> (<year>2018</year>). <article-title>A wearable SSVEP-based BCI system for quadcopter control using head-mounted device</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>26789</fpage>&#x02013;<lpage>26798</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2825378</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>P.</given-names></name> <name><surname>Wu</surname> <given-names>P.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Chi</surname> <given-names>H. L.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name></person-group> (<year>2018</year>). <article-title>A critical review of the use of virtual reality in construction engineering education and training</article-title>. <source>Int. J. Environ. Res. Public Health</source> <volume>15</volume>, <fpage>1204</fpage>. <pub-id pub-id-type="doi">10.3390/ijerph15061204</pub-id><pub-id pub-id-type="pmid">29890627</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Hu</surname> <given-names>W.</given-names></name></person-group> (<year>2022</year>). <article-title>Intelligent software-driven immersive environment for online political guiding based on brain-computer interface and autonomous systems</article-title>. <source>Automat. Softw. Eng.</source> <volume>29</volume>, <fpage>1</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1007/s10515-021-00300-2</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xiao-Dong</surname> <given-names>L.</given-names></name> <name><surname>Hong-Hui</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Research on VR-supported flipped classroom based on blended learning&#x02014;a case study in &#x0201C;Learning English through News&#x0201D;</article-title>. <source>Int. J. Inf. Educ. Technol.</source> <volume>10</volume>, <fpage>104</fpage>&#x02013;<lpage>109</lpage>. <pub-id pub-id-type="doi">10.18178/ijiet.2020.10.2.1347</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>R.</given-names></name> <name><surname>Xu</surname> <given-names>Z.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name> <name><surname>Cao</surname> <given-names>L.</given-names></name> <name><surname>Hu</surname> <given-names>Y.</given-names></name> <name><surname>Lu</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>The effect of stimulus number on the recognition accuracy and information transfer rate of SSVEP&#x02013;BCI in augmented reality</article-title>. <source>J. Neural Eng.</source> <volume>19</volume>, <fpage>036010</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2552/ac6ae5</pub-id><pub-id pub-id-type="pmid">35477130</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Shi</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Augmented reality (AR) learning application based on the perspective of situational learning: high efficiency study of combination of virtual and real</article-title>. <source>Psychology</source> <volume>11</volume>, <fpage>1340</fpage>&#x02013;<lpage>1348</lpage>. <pub-id pub-id-type="doi">10.4236/psych.2020.119086</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zoda</surname> <given-names>L.</given-names></name></person-group> (<year>2022</year>). <article-title>The main difficulties arising during the mastering of the Russian (foreign) language by students of national groups</article-title>. <source>Asian J. Res. Soc. Sci. Human.</source> <volume>12</volume>, <fpage>77</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.5958/2249-7315.2022.00126.5</pub-id></citation>
</ref>
</ref-list> 
</back>
</article>