<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.910677</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The Teaching Strategy of Socio-Political Education by Deep Learning Under Educational Psychology</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Chen</surname><given-names>Zhen</given-names></name><xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author"><name><surname>Wen</surname><given-names>Lan</given-names></name><xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author"><name><surname>He</surname><given-names>Xiaoqing</given-names></name><xref rid="aff3" ref-type="aff"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes"><name><surname>Chen</surname><given-names>Peiyao</given-names></name><xref rid="aff4" ref-type="aff"><sup>4</sup></xref><xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
</contrib>
<contrib contrib-type="author"><name><surname>Wu</surname><given-names>Hua</given-names></name><xref rid="aff4" ref-type="aff"><sup>4</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1729089/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Marxism, Northeastern University</institution>, <addr-line>Shenyang</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>South China Business College, Guangdong University of Foreign Studies/Faculty of Education, Guangxi Normal University</institution>, <addr-line>Guangzhou</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>School of Marxism, Chengdu Normal University</institution>, <addr-line>Chengdu</addr-line>, <country>China</country></aff>
<aff id="aff4"><sup>4</sup><institution>School of History and Culture, University of Birmingham</institution>, <addr-line>Birmingham</addr-line>, <country>United Kingdom</country></aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by">
<p>Edited by: Ruey-Shun Chen, National Chiao Tung University, Taiwan</p>
</fn>
<fn id="fn0002" fn-type="edited-by">
<p>Reviewed by: Christos Troussas, University of West Attica, Greece; Mehwish Naseer, HITEC University, Pakistan</p>
</fn>
<corresp id="c001">&#x002A;Correspondence: Peiyao Chen, <email>chenpeiyao199304@163.com</email></corresp>
<fn id="fn0003" fn-type="other">
<p>This article was submitted to Educational Psychology, a section of the journal Frontiers in Psychology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>07</day>
<month>02</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>910677</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>04</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>20</day>
<month>06</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Chen, Wen, He, Chen and Wu.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Chen, Wen, He, Chen and Wu</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>This study aims to optimize the teaching content of ideological and political courses and guide students to establish correct values. Inspired by Artificial Intelligence, the K-means clustering algorithm was applied to the neural collaborative filtering algorithm through temporal data, and a deep learning algorithm was designed for the improved matrix factorization. Evaluation indicators were selected through experiments, and relevant data sets were used for simulation and testing. The test results indicated that the Root Mean Square Error of this scheme was 1.251 and the Mean Absolute Error was 0.625; both index values were better than those of similar algorithms, indicating that the optimized model performs better and can recommend suitable courses. The innovative algorithm designed to construct the classroom teaching model of socio-political education can accurately recommend proper courses according to students&#x2019; learning situations, which reflect their psychological states. The research provides adaptive teaching for students, enables interaction between teachers and students, and helps students form correct values. It also plays an important role in improving teaching strategies.</p>
</abstract>
<kwd-group>
<kwd>educational psychology</kwd>
<kwd>deep learning</kwd>
<kwd>socio-political education</kwd>
<kwd>teaching strategy</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="2"/>
<equation-count count="17"/>
<ref-count count="33"/>
<page-count count="10"/>
<word-count count="6456"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>Introduction</title>
<p>Students are a vital force in a country&#x2019;s future development, and they form their outlook on life during their school years (<xref ref-type="bibr" rid="ref16">Qian et al., 2018</xref>). Amid a proliferation of information, students are easily and adversely affected by complex information that can distort their perspectives. Therefore, all parties with educational responsibilities must strengthen the education and guidance of students&#x2019; outlook on life to ensure the quality of talent (<xref ref-type="bibr" rid="ref2">Chen, 2019</xref>). At present, contemporary educational psychology is increasingly combined with information technologies, such as deep learning (DL), for cross-disciplinary research (<xref ref-type="bibr" rid="ref18">Rogoza et al., 2018</xref>). DL-based recommendation algorithms are common in the education field. The performance of the recommendation algorithm has a key impact on using educational psychology theory to provide students with adaptive teaching, so improving that performance is crucial (<xref ref-type="bibr" rid="ref27">Wu et al., 2019</xref>).</p>
<p>The development of recommendation algorithms mirrors the development of recommender systems (<xref ref-type="bibr" rid="ref6">Fang and Lu, 2021</xref>). The earliest recommendation algorithms were designed to provide personalized movie-viewing services to viewers. Later, scholars applied collaborative algorithms to recommender systems, such as the item-based collaborative filtering algorithm (CFA). With the continuous development of recommender systems, the variety of recommendation methods has grown, including content-based recommendation, collaborative filtering, and hybrid intelligent recommendation (<xref ref-type="bibr" rid="ref27">Wu et al., 2019</xref>). They offer many possibilities for modern education and personalized educational recommendation. However, when data values are missing, these recommendation algorithms face declining accuracy and data sparsity (<xref ref-type="bibr" rid="ref24">Wu et al., 2020</xref>).</p>
<p>The research motivation is to resolve the problem of data sparsity with an improved recommendation algorithm based on the neural CFA, combining the advantages of DL and the traditional CFA (<xref ref-type="bibr" rid="ref25">Wu and Song, 2019</xref>). Traditional Matrix Factorization (MF) was used to discover the linear relationships between users and items, while multi-layer neural networks were used to explore the nonlinear relationships, reducing the interference of missing data values and optimizing data accuracy. Curriculum resources should evolve as teaching continues to improve, so the K-means clustering algorithm was applied to the neural CFA through temporal data to maximize course-resource optimization. Relevant evaluation indicators were selected through experiments on relevant datasets, and the experimental results demonstrated that the optimized algorithm performs well. The novelty of the research lies in using the neural CFA to design an algorithm model that recommends suitable socio-political courses according to students&#x2019; actual situations and promotes teacher&#x2013;student interaction. The research findings make an essential contribution to improving teaching strategies in ideological and political courses.</p>
</sec>
<sec id="sec2">
<title>Recent Related Work</title>
<p><xref ref-type="bibr" rid="ref17">Ren et al. (2021)</xref> predicted the high dropout rate faced by the MOOC platform by combining DL and ensemble learning. The authors used convolutional neural networks to extract hidden features from raw data and fed the output features into an ensemble learning model. Various traditional classification methods were then used for training and prediction, and the prediction results of the models were fused into the final result. The research has important reference value for constructing prediction models of students&#x2019; dropout behavior. <xref ref-type="bibr" rid="ref21">Troussas et al. (2020a)</xref> proposed a framework employing artificial neural networks and weighted models of students&#x2019; collaborative learning styles to provide learners with recommendations for collaborative activities. The authors evaluated this framework through statistical hypothesis testing. The results showed that the proposed method was pedagogically sound and practical and positively impacted learning outcomes. <xref ref-type="bibr" rid="ref15">Pu et al. (2020)</xref> put forward a general framework based on deep reinforcement learning for AI planning and decision-making in instructional sequencing. Both simulations and classroom experiments confirmed the validity of the method. <xref ref-type="bibr" rid="ref22">Troussas et al. (2020b)</xref> programmed computers with a teaching strategy of adaptive learning activities. The results showed that the proposed method outperformed methods lacking adaptability in terms of domain knowledge and learning theory, significantly improving students&#x2019; learning ability. <xref ref-type="bibr" rid="ref4">Citakoglu (2021)</xref> conducted long-term monthly temperature estimation in Turkey by comparing multiple Artificial Intelligence learning models (<xref ref-type="bibr" rid="ref10">Li et al., 2019</xref>), proving the validity of the proposed method with a survey of monthly air temperature using data from 250 measuring stations in Turkey. In sum, research on adaptive teaching strategies from the perspective of educational psychology has important reference value for applying DL in the education field. This work studied an improved recommendation algorithm, and the test results demonstrated the excellent performance of the optimized algorithm. It can recommend appropriate socio-political courses and facilitate the improvement of course resources and teaching methods.</p>
</sec>
<sec id="sec3" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec id="sec4">
<title>Research on Educational Psychology and Teaching Strategies</title>
<p>Socio-political courses at school are an important part of students&#x2019; outlook-on-life education, helping them develop a healthy, positive attitude and establish a correct outlook on life. However, many problems remain in this education, and educators need to identify these issues and address them with appropriate methods (<xref ref-type="bibr" rid="ref32">Yuan and Wu, 2020</xref>). Conforming to the times and applying new educational concepts are rigid requirements for developing students&#x2019; outlook-on-life education (<xref ref-type="bibr" rid="ref26">Wu et al., 2021</xref>). Only by following the laws of students&#x2019; psychological development and sound educational principles can education guide students to establish a correct outlook on life and promote their healthy, positive growth (<xref ref-type="bibr" rid="ref28">Wu and Wu, 2017</xref>). Educational psychology studies the basic laws of teaching and learning in educational environments, analyzes and optimizes teaching and learning methods, and explores students&#x2019; abilities and potential. In short, educational psychology is a psychology-based discipline that studies human learning, educational intervention, instructional psychology, and social psychology in the context of education (<xref ref-type="bibr" rid="ref33">Zheng et al., 2018</xref>). It focuses on applying psychological theory and research results to education, including curriculum design, to promote students&#x2019; academic motivation and help them face setbacks in life, learning, and growth.</p>
<p>Educational psychology primarily studies students&#x2019; mastery of knowledge and skills and the psychological phenomena and developmental laws that arise under educational and teaching conditions, such as the formation of moral standards and personality. It therefore has characteristics distinct from both pedagogy and general psychology. Learning and teaching form an interactive system running through the entire learning process, involving students, teachers, teaching content, the teaching environment, and teaching media. The teaching and evaluation processes interact and carry both pedagogical and psychological tasks. The essential features, functions, and meanings of educational psychology can be summarized through retrospective, quantitative, and qualitative methods, which is of great significance for promoting the development of the field. With the development of society, the application of information technology has become part of the research content of educational psychology. <xref rid="fig1" ref-type="fig">Figure 1</xref> presents research trends in educational psychology (<xref ref-type="bibr" rid="ref29">Xiao and Shen, 2019</xref>).</p>
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Trends in educational psychology research.</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g001.tif"/>
</fig>
<p>As shown in <xref rid="fig1" ref-type="fig">Figure 1</xref>, the research trend in educational psychology lies in applying information technology to fully mobilize learners&#x2019; autonomy and initiative and to improve their interest in learning and learning effect under the influence of the social environment and cultural background. Therefore, this paper used DL technology in educational psychology to analyze the curriculum content students need and to help teachers analyze their learning situations.</p>
</sec>
<sec id="sec5">
<title>Course Recommendation Algorithm</title>
<p>At present, even newly developed course recommendation algorithms still have several problems that need to be solved, as shown in <xref rid="fig2" ref-type="fig">Figure 2</xref>.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Deficiencies of the existing course recommendation algorithm.</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g002.tif"/>
</fig>
<p>The data sparsity in course recommendation shown in <xref rid="fig2" ref-type="fig">Figure 2</xref> arises because previous CFAs make recommendations based on analysis of the user&#x2019;s past information (<xref ref-type="bibr" rid="ref30">Xiao et al., 2018</xref>). However, information on user ratings of relevant items is incomplete, and in real usage environments it is impossible to determine whether the ratings are accurate. A rating may become redundant once a user&#x2019;s coursework is complete; correspondingly, many users never perform rating operations at all, leaving fewer data sources (<xref ref-type="bibr" rid="ref001">Zhu et al., 2018</xref>). The accuracy of recommendation algorithms cannot be guaranteed under the combined effect of two factors: the continuous update and development of learning resources and the dynamic learning process of users (<xref ref-type="bibr" rid="ref20">Suganya et al., 2020</xref>). Furthermore, temporal information differs markedly from auxiliary information: it has no fixed intervals but grows continuously as other data flow in, and it cannot be input directly (<xref ref-type="bibr" rid="ref14">Perrotta and Selwyn, 2020</xref>).</p>
<p>Neural networks can discover high-level features while learning the nonlinear relationships between users and items, resolving the above three problems and mitigating the impact of missing data.</p>
</sec>
<sec id="sec6">
<title>Neural Networks in Deep Learning</title>
<p>Teachers can use the course recommendation system to teach students the knowledge they actually need (<xref ref-type="bibr" rid="ref31">You, 2019</xref>). Because remote teaching lacks face-to-face communication between teachers and students, teachers can only consider the learning situation of the majority of students when explaining knowledge. This may lead to a gradual disconnect between classroom lessons and students&#x2019; needs (<xref ref-type="bibr" rid="ref9">Knowles, 2018</xref>). Course recommendation systems can efficiently solve this problem: the combination of DL and traditional recommendation algorithms can help teachers gain a comprehensive understanding of students&#x2019; individual needs (<xref ref-type="bibr" rid="ref11">Li and Ye, 2020</xref>).</p>
<p>Neural networks can take complex problems as input to algorithmic models for analysis. According to the characteristics of the problem, the connection weights among the nodes in the model are adjusted to improve processing efficiency. A neuron is the most fundamental unit of the model. Each neuron can receive multiple inputs, and the weights of the input data affect the entire neuron (<xref ref-type="bibr" rid="ref5">Fan et al., 2021</xref>). <xref rid="fig3" ref-type="fig">Figure 3</xref> displays a neural network composed of neurons.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Structure of the neural network.</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g003.tif"/>
</fig>
<p>In a neural network, the function of the input layer is to receive external signals. The hidden layer is usually located between the input layer and the output layer. The network finds suitable solutions through data processing and analysis and continuous network learning and training (<xref ref-type="bibr" rid="ref23">Wang, 2021</xref>). A neural network&#x2019;s structure determines its output.</p>
<p>In a neural network, input data are transformed into output data at the nodes of each layer and then passed to the next node (<xref ref-type="bibr" rid="ref13">Nassar et al., 2020</xref>). An activation function efficiently transforms the output of one layer into the input of the next; equivalently, it maps a neuron&#x2019;s input to its output. Using an activation function turns the linear relationship between a layer&#x2019;s input and output into a nonlinear one, enabling deeper relationships to be explored. The most frequently used activation functions are the Sigmoid, Tanh, and ReLU functions (<xref ref-type="bibr" rid="ref1">Bobadilla et al., 2020</xref>).</p>
<p>The Sigmoid function maps any real-valued input to an output between 0 and 1. It achieves satisfactory results when the differences in object characteristics are not particularly pronounced. However, the function is relatively expensive to compute, and the vanishing-gradient problem often arises. The conversion of the input x through the function <italic>S</italic> can be expressed as <xref ref-type="disp-formula" rid="EQ1">Equations (1)</xref> and <xref ref-type="disp-formula" rid="EQ2">(2)</xref>.</p>
<disp-formula id="EQ1">
<label>(1)</label>
<mml:math id="M1">
<mml:mrow>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ2">
<label>(2)</label>
<mml:math id="M2">
<mml:mrow>
<mml:msup>
<mml:mi>S</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:mo>=</mml:mo>
<mml:mi>S</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="normal">S</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>The Tanh function is a hyperbolic function that compensates for the Sigmoid function&#x2019;s asymmetry about the origin. Still, it has the disadvantage of gradient saturation. The Tanh function can be written as <xref ref-type="disp-formula" rid="EQ3">Equation (3)</xref>.</p>
<disp-formula id="EQ3">
<label>(3)</label>
<mml:math id="M3">
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mi>x</mml:mi>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mi>x</mml:mi>
</mml:msup>
<mml:mo>+</mml:mo>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Compared with the Sigmoid and Tanh functions, the ReLU function does not suffer from gradient saturation. In practical applications, it only requires taking a maximum rather than computing an exponent. However, during learning and training, ReLU can exhibit other gradient problems, such as neuron deactivation (<xref ref-type="bibr" rid="ref3">Chen et al., 2019</xref>). <xref ref-type="disp-formula" rid="EQ4">Equations (4)</xref> and <xref ref-type="disp-formula" rid="EQ5">(5)</xref> indicate the relationship between the ReLU function and the input x.</p>
<disp-formula id="EQ4">
<label>(4)</label>
<mml:math id="M4">
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mi>max</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mi mathvariant="normal">,x</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ5">
<label>(5)</label>
<mml:math id="M5">
<mml:mrow>
<mml:mi>Re</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>U</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="normal">x</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
<mml:mi>x</mml:mi>
<mml:mo>&#x003E;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
<mml:mi>f</mml:mi>
<mml:mi>x</mml:mi>
<mml:mo>&#x2264;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
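To make the three activation functions above concrete, here is a minimal NumPy sketch (the function names and the NumPy implementation are ours, not the paper's):

```python
import numpy as np

def sigmoid(x):
    # Equation (1): S(x) = 1 / (1 + e^(-x)); output lies in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # Equation (2): S'(x) = S(x) * (1 - S(x))
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh(x):
    # Equation (3): (e^x - e^(-x)) / (e^x + e^(-x)); zero-centered
    return np.tanh(x)

def relu(x):
    # Equations (4)-(5): max(0, x); only a comparison, no exponent
    return np.maximum(0.0, x)
```

For example, sigmoid(0.0) is exactly 0.5 and its gradient there peaks at 0.25, which illustrates why deep stacks of Sigmoid layers tend to see gradients shrink.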
</sec>
<sec id="sec7">
<title>CFA Model <italic>via</italic> Neural Networks</title>
<p>CFA achieves an excellent recommendation effect and is therefore widely used. Its working principle is to classify users according to their preferences, based on an analysis of their historical behavior data, and then recommend similar products to users in the same category. CFA is also a research hotspot, and researchers have optimized it to improve recommendation performance (<xref ref-type="bibr" rid="ref7">Fu et al., 2018</xref>). Existing CFAs use MF to construct hidden features for users and items. However, a simple inner product cannot accurately estimate the complex relationship between users and items, which limits recommendation accuracy. This study used a neural CFA model to replace the inner product operation in standard MF to discover both the linear and nonlinear relationships between users and items. <xref rid="fig4" ref-type="fig">Figure 4</xref> illustrates the specific framework of the neural CFA.</p>
<fig position="float" id="fig4">
<label>Figure 4</label>
<caption>
<p>Framework of neural collaborative filtering algorithm (CFA).</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g004.tif"/>
</fig>
<p>The neural CFA model in <xref rid="fig4" ref-type="fig">Figure 4</xref> consists of four layers: the input layer, the embedding layer, the neural collaborative filtering layer, and the output layer. The input layer receives users and items and converts them into vectors; for example, an input among &#x1d45b; users is encoded as a sparse 1&#x2009;&#x00D7;&#x2009;&#x1d45b; one-hot vector. At the embedding layer, the input vector is multiplied by the embedding matrix &#x1d45d;; with &#x1d45b; users and embedding dimension &#x1d45a;, the embedding matrix size is &#x1d45a;&#x2009;&#x00D7;&#x2009;&#x1d45b;, and each column is one user&#x2019;s embedding vector. The output layer computes the final output from the user and item embedding matrices after the neural collaborative filtering layer.</p>
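The one-hot encoding and embedding lookup described above can be sketched in NumPy (the toy dimensions and variable names are illustrative assumptions):

```python
import numpy as np

n_users, embed_dim = 5, 3                   # n users, embedding dimension m
rng = np.random.default_rng(0)
G = rng.normal(size=(embed_dim, n_users))   # m x n embedding matrix

user_id = 2
one_hot = np.zeros(n_users)                 # sparse 1 x n input vector
one_hot[user_id] = 1.0

# Multiplying by the one-hot vector selects that user's embedding column
embedding = G @ one_hot
assert np.allclose(embedding, G[:, user_id])
```

In practice, frameworks skip the explicit matrix product and index the embedding column directly, which is why the one-hot input can stay sparse.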
<sec id="sec8">
<title>Generalized MF Model</title>
<p>MF decomposes a matrix into the product of two or more matrices, which overcomes the data-sparsity shortcomings of the earlier proximity-based CFA. The left branch of the neural CFA model is the Generalized Matrix Factorization (GMF) model. The GMF model takes the element-wise product of vectors and outputs a vector. The GMF model can be defined as <xref ref-type="disp-formula" rid="EQ6">Equations (6)</xref>&#x2013;<xref ref-type="disp-formula" rid="EQ7"/><xref ref-type="disp-formula" rid="EQ8"/><xref ref-type="disp-formula" rid="EQ9">(9)</xref>.</p>
<disp-formula id="EQ6">
<label>(6)</label>
<mml:math id="M6">
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>p</mml:mi>
<mml:mn>1</mml:mn>
</mml:msup>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:msup>
<mml:mi>p</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ7">
<label>(7)</label>
<mml:math id="M7">
<mml:mrow>
<mml:msub>
<mml:mi>q</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>q</mml:mi>
<mml:mn>1</mml:mn>
</mml:msup>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:msup>
<mml:mi>q</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ8">
<label>(8)</label>
<mml:math id="M8">
<mml:mrow>
<mml:mi>&#x03C6;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">p</mml:mi>
<mml:mi>u</mml:mi>
</mml:msub>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="normal">,q</mml:mi>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi mathvariant="normal">p</mml:mi>
<mml:mi>u</mml:mi>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:msub>
<mml:mi>q</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>p</mml:mi>
<mml:mn>1</mml:mn>
</mml:msup>
<mml:msup>
<mml:mi>q</mml:mi>
<mml:mn>1</mml:mn>
</mml:msup>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:msup>
<mml:mi>p</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:msup>
<mml:mi>q</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mi mathvariant="normal">,</mml:mi>
<mml:msup>
<mml:mi>p</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:msup>
<mml:mi>q</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ9">
<label>(9)</label>
<mml:math id="M9">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:msub>
<mml:mi>q</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Among the above equations, <inline-formula>
<mml:math id="M10">
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M11">
<mml:mrow>
<mml:msub>
<mml:mi>q</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are input into the embedding layer to obtain the latent feature vectors of users and items. The relevance between the user and the item is obtained through their inner product, and the final prediction result is output through the output layer. &#x1d6fc;<sub>&#x1d45c;&#x1d462;&#x1d461;</sub> denotes the Sigmoid function. <inline-formula>
<mml:math id="M12">
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:msub>
<mml:mi>q</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> represents the overall output layer.</p>
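<p>As a minimal illustrative sketch (not the authors&#x2019; implementation), the GMF computation of Equations (8) and (9) can be written in a few lines of Python; the latent vectors and output-layer weights below are hypothetical values chosen only for demonstration.</p>

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gmf_predict(p_u, q_i, h):
    # Eq. (8): element-wise product of the user and item latent vectors
    phi = [p * q for p, q in zip(p_u, q_i)]
    # Eq. (9): weighted sum through the output layer, then Sigmoid
    return sigmoid(sum(w * x for w, x in zip(h, phi)))

# Hypothetical 4-dimensional latent vectors for one user/item pair
p_u = [0.2, 0.5, 0.1, 0.4]
q_i = [0.3, 0.2, 0.6, 0.1]
h = [1.0, 1.0, 1.0, 1.0]   # output-layer weights
score = gmf_predict(p_u, q_i, h)   # a value in (0, 1)
```

<p>The element-wise product keeps one interaction term per latent dimension, so the output layer can weight each dimension separately before the Sigmoid maps the score into (0, 1).</p>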
</sec>
<sec id="sec9">
<title>Artificial Neural Network With Forwarding Structure</title>
<p>Multi-Layer Perceptron (MLP) is a frequently used artificial neural network with a forward structure. Adjacent layers are fully connected, and data is transmitted between layers by forward propagation. Here, MLP was used to compute each output, and the Back-Propagation algorithm was used to find the optimal parameters.</p>
<p>The &#x03B1;<sub>h</sub> and &#x03B1;<sub>out</sub> activations are obtained from the hidden layer and the output layer according to <xref ref-type="disp-formula" rid="EQ10">Equations (10)</xref> and <xref ref-type="disp-formula" rid="EQ11">(11)</xref>, where &#x03B1; refers to the input data of the input layer, &#x1d464;<sub>1</sub> represents the weight of the hidden layer, and &#x1d464;<sub>2</sub> denotes the weight of the output layer.</p>
<disp-formula id="EQ10">
<label>(10)</label>
<mml:math id="M13">
<mml:mrow>
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">w</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:mi>&#x03B1;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ11">
<label>(11)</label>
<mml:math id="M14">
<mml:mrow>
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mrow>
<mml:mi>o</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="normal">w</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x22C5;</mml:mo>
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mi>h</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
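<p>A minimal sketch of the forward pass in Equations (10) and (11), assuming ReLU and Sigmoid as the activation functions <italic>f</italic><sub>1</sub> and <italic>f</italic><sub>2</sub>; the input and weights are hypothetical toy values:</p>

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(weights, inputs, activation):
    # One fully connected layer: each row of `weights` produces one output unit
    return [activation(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Eq. (10): hidden activation a_h = f1(w1 . a); Eq. (11): a_out = f2(w2 . a_h)
a = [0.5, 0.8]                   # input data
w1 = [[0.1, 0.4], [0.3, 0.2]]    # hidden-layer weights
w2 = [[0.6, 0.9]]                # output-layer weights
a_h = dense(w1, a, relu)
a_out = dense(w2, a_h, sigmoid)  # final score in (0, 1)
```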
<p>The MLP framework with the special network structure can quickly and accurately find deep connections between users and objects. <xref rid="fig5" ref-type="fig">Figure 5</xref> reveals the framework of MLP.</p>
<fig position="float" id="fig5">
<label>Figure 5</label>
<caption>
<p>Framework of multi-layer perceptron (MLP).</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g005.tif"/>
</fig>
<p>Unlike the GMF model, which processes the user and item embedding vectors in the embedding layer, MLP inputs the user and item feature vectors p<sub>u</sub> and q<sub>i</sub> into a multi-layer neural network to obtain the final scoring result.</p>
</sec>
</sec>
<sec id="sec10">
<title>Construction of the Neural CFA Model Using Temporal Auxiliary Information</title>
<p>In practice, the time variable of curriculum resources has a greater effect than it does for other kinds of resources. Existing recommendation algorithms usually do not consider temporal information; they generally assume that the user&#x2019;s preference is fixed. In the real teaching process, however, the user&#x2019;s preference changes over time. Therefore, incorporating DL into the algorithm alone is not enough, so relevant temporal auxiliary information is added to discover the user&#x2019;s dynamic preferences and optimize the recommendation effect. Compared with other recommendation models, the course resource recommendation model designed here considers time, the continuous advancement of learning stages, and the dynamic update of course resources. Each user is likely to face different learning tasks at different stages. Courses usually span a long time, and it is unnecessary to recommend courses that students rarely watch. The K-means clustering algorithm classifies interactions by their temporal information, and the result is fed into the MLP model and the GMF model as a time feature vector to construct a neural CFA model integrating temporal auxiliary information. This model realizes dynamic recommendation with high accuracy.</p>
<p>The K-means clustering algorithm is a partitional clustering algorithm. It randomly selects several points from the data set as initial center values. Each point in the data set is then compared with these centers by calculating its distance to each center, and every point is assigned to the cluster whose center is closest to it. The cluster centers are then recomputed from the assigned points. The algorithm loops while the cluster centers keep changing and ends when they no longer change. This method divides the data into several categories. <xref ref-type="disp-formula" rid="EQ12">Equation (12)</xref> describes the minimized distance <italic>y</italic>.</p>
<disp-formula id="EQ12">
<label>(12)</label>
<mml:math id="M15">
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>min</mml:mi>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:munderover>
<mml:munder>
<mml:mrow>
<mml:mi>min</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>,</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:munder>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>&#x03BC;</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
<p>In <xref ref-type="disp-formula" rid="EQ12">Equation (12)</xref>, &#x1d465;<sub>&#x1d456;</sub> denotes the data in the data set; &#x1d707; stands for the cluster center value; <italic>k</italic> signifies the number of initial clusters.</p>
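<p>A compact sketch of the clustering loop described above, applied to one-dimensional time values; the timestamps and cluster count below are hypothetical:</p>

```python
import random

def kmeans_1d(data, k, iters=100, seed=0):
    """Partition scalar time values into k clusters (the Eq. 12 objective)."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)   # random initial center values
    for _ in range(iters):
        # assign every point to its nearest center
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda c: (x - centers[c]) ** 2)
            clusters[j].append(x)
        # recompute centers; stop when they no longer change
        new_centers = [sum(c) / len(c) if c else centers[j]
                       for j, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    return centers

# Hypothetical viewing timestamps (hours) mapped to 2 time buckets
times = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = kmeans_1d(times, k=2)
```

<p>Mapping every raw timestamp to its nearest cluster center bounds the time feature to a small set of intervals, which is what makes it usable as an auxiliary input vector.</p>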
<p>The K-means algorithm can find correlations among messy data for classification. Time is a unique kind of auxiliary information: it is unbounded, so using raw time as auxiliary information would increase the complexity of the model. The K-means algorithm solves this problem by mapping temporal information onto a bounded set of intervals. The clustered temporal information is then input into the Improved Neural Matrix Factorization (Neu MF) model as auxiliary information. <xref rid="fig6" ref-type="fig">Figure 6</xref> provides the basic framework of the Improved Neu MF model.</p>
<fig position="float" id="fig6">
<label>Figure 6</label>
<caption>
<p>Improved Neu MF model.</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g006.tif"/>
</fig>
<p>The number of network layers increases when the Improved Neu MF model conducts learning and training, which slows down convergence during training. Batch normalization standardizes the distribution of the input values of each layer of the neural network to accelerate the overall training speed. A batch normalization layer is used in the MLP model to alleviate the over-fitting problem while the training speed increases. The obtained deep-level feature vectors are output using the Sigmoid function after linear and nonlinear learning. <xref ref-type="disp-formula" rid="EQ13">Equations (13</xref>&#x2013;<xref ref-type="disp-formula" rid="EQ14"/><xref ref-type="disp-formula" rid="EQ15"/><xref ref-type="disp-formula" rid="EQ16"/><xref ref-type="disp-formula" rid="EQ17">17)</xref> define the Improved Neu MF model.</p>
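<p>Batch normalization itself is a short computation; the sketch below standardizes one hidden unit&#x2019;s activations over a mini-batch (the activation values and the scale/shift parameters are hypothetical):</p>

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Standardize one activation across a mini-batch, then scale and shift."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

# Hypothetical activations of one hidden unit over a mini-batch of 4 samples
acts = [0.5, 2.0, -1.0, 3.5]
normed = batch_norm(acts)
# After normalization the batch has (near-)zero mean and (near-)unit variance
```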
<disp-formula id="EQ13">
<label>(13)</label>
<mml:math id="M16">
<mml:mrow>
<mml:msup>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x22C5;</mml:mo>
<mml:msubsup>
<mml:mi>q</mml:mi>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>+</mml:mo>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x22C5;</mml:mo>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ14">
<label>(14)</label>
<mml:math id="M17">
<mml:mrow>
<mml:msup>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mi mathvariant="normal">MLP</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mi mathvariant="normal">L</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi mathvariant="normal">W</mml:mi>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mi mathvariant="normal">T</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mrow>
<mml:mi mathvariant="normal">L</mml:mi>
<mml:mo>&#x2010;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo>&#x2026;</mml:mo>
<mml:msub>
<mml:mi>&#x03B1;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>W</mml:mi>
<mml:mn>2</mml:mn>
<mml:mi>T</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msubsup>
<mml:mi>q</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2026;</mml:mo>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mi>L</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ15">
<label>(15)</label>
<mml:math id="M18">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mi>&#x03C3;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>h</mml:mi>
<mml:mi>T</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>X</mml:mi>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>L</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ16">
<label>(16)</label>
<mml:math id="M19">
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>n</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>n</mml:mi>
</mml:munderover>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi mathvariant="normal">real</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="EQ17">
<label>(17)</label>
<mml:math id="M20">
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mo>=</mml:mo>
<mml:mo>&#x2212;</mml:mo>
<mml:munder>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2208;</mml:mo>
<mml:mi mathvariant="normal">D</mml:mi>
<mml:mo>&#x222A;</mml:mo>
<mml:msup>
<mml:mi mathvariant="normal">D</mml:mi>
<mml:mo>&#x2212;</mml:mo>
</mml:msup>
</mml:mrow>
</mml:munder>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi mathvariant="normal">real</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi>log</mml:mi>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi mathvariant="normal">real</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>log</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>In the above equations, <inline-formula>
<mml:math id="M21">
<mml:mrow>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>M</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M22">
<mml:mrow>
<mml:msubsup>
<mml:mi>p</mml:mi>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> represent the user&#x2019;s input in MLP and GMF, respectively; <inline-formula>
<mml:math id="M23">
<mml:mrow>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>G</mml:mi>
<mml:mi>M</mml:mi>
<mml:mi>F</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula>
<mml:math id="M24">
<mml:mrow>
<mml:msubsup>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> are the categorization information of watch time of user &#x1d462; and item <italic>i</italic> in GMF and MLP. They are input as one-dimensional vectors.</p>
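<p>A simplified sketch of how the two branches in Equations (13)&#x2013;(15) are fused and scored, together with the standard binary cross-entropy loss of Equation (17); all vectors and weights below are hypothetical toy values, and the MLP-tower output of Equation (14) is assumed rather than computed:</p>

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gmf_branch(p_u, q_i, p_t, t_ui):
    # Eq. (13): element-wise user-item product plus a time interaction term
    return [p * q + pt * t for p, q, pt, t in zip(p_u, q_i, p_t, t_ui)]

def neumf_score(x_gmf, x_mlp, h):
    # Eq. (15): concatenate the two branch outputs, then apply the
    # Sigmoid output layer with weight vector h
    fused = x_gmf + x_mlp
    return sigmoid(sum(w * x for w, x in zip(h, fused)))

def bce_loss(preds, labels):
    # Eq. (17): binary cross-entropy over observed and sampled interactions
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for p, t in zip(preds, labels))

x_gmf = gmf_branch([0.2, 0.4], [0.5, 0.1], [0.3, 0.3], [0.1, 0.1])
x_mlp = [0.6, 0.2]                # assumed output of the MLP tower, Eq. (14)
h = [1.0, 1.0, 1.0, 1.0]
score = neumf_score(x_gmf, x_mlp, h)
loss = bce_loss([score], [1.0])   # loss for one positive interaction
```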
</sec>
<sec id="sec11">
<title>Data Acquisition and Experimental Validation</title>
<p>There are many online classroom platforms, such as NetEase Classroom and Tencent Classroom. The Scrapy framework in Python was employed to crawl course data from the NetEase classroom, generating 283,455 sets of data. Data of users with more than 12 course records were randomly selected from these data sets; among them, the records of 5,598 users and 533 courses were used as the experimental data. For each user, data from watched courses together with 100 courses the user had not watched were used as the test set, and the rest were used as the training set. The improved CFA was compared with conventional CFA, Neural Matrix Factorization (NeuMF), and improved NeuMF algorithms in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) for performance verification. The learning rate of the DL model was set to 0.001, the regularization parameter to 5.0, the Batch Size to 128, the activation function to ReLU, and the number of iterations to 5,000.</p>
</sec>
</sec>
<sec id="sec12">
<title>Experimental Design and Research Results</title>
<sec id="sec13">
<title>Experiment Design and Research Process</title>
<p>This study uses two metrics to measure the quality of course recommendation: MAE and RMSE. These two evaluation indicators are widely used to evaluate the pros and cons of the recommended course algorithm. Specifically, this study treats all course selection records as a positive sample. The most recent courses taken by each user are used for testing, and the remaining courses taken are used for training. K courses are randomly selected from the courses that the user has not taken as negative samples and added to the training set, so the ratio of positive and negative samples in the training set is 1:K. Here, the hyperparameter K is set to 4. In addition, it takes a lot of time to recommend and sort all courses for each user due to the large number of courses in the MOOC platform. Therefore, this paper randomly selects some courses from the ones that the target users have not taken as negative samples and adds them to the test set. In this paper, each positive sample in the test set corresponds to 19 negative samples, so each user in the test set corresponds to 20 interactions. This method is also extensively used in other related studies.</p>
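<p>The 1:K negative-sampling scheme described above can be sketched as follows; the course identifiers and the user&#x2019;s records are hypothetical:</p>

```python
import random

def build_training_pairs(taken, all_courses, K=4, seed=0):
    """Pair each positive (taken) course with K sampled negatives (1:K ratio)."""
    rng = random.Random(seed)
    untaken = [c for c in all_courses if c not in taken]
    pairs = []
    for course in taken:
        pairs.append((course, 1))             # positive sample
        for neg in rng.sample(untaken, K):    # K negative samples
            pairs.append((neg, 0))
    return pairs

all_courses = list(range(50))   # hypothetical course ids
taken = [3, 17, 25]             # one user's course-selection records
pairs = build_training_pairs(taken, all_courses, K=4)
# len(pairs) == len(taken) * (1 + K), i.e. one positive per K negatives
```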
<p>The performance of the algorithm is significantly improved when the size of the embedding layer is increased from 8 to 16. The reason is that part of the feature information is lost when the embedding layer is too small, limiting the representation ability of the model. However, the performance improvement becomes smaller when the size of the embedding layer increases from 16 to 32. This is because most of the feature information can be effectively represented at this time; continuing to increase the embedding layer has a limited improvement in the performance of the model, and at the same time, it will increase the parameters that need to be trained, thereby increasing the convergence time when training the model. The interaction between different indicators needs to be considered when evaluating the course quality. The specific research process of establishing the course quality evaluation system is as follows.</p>
<p>First, a hierarchical structure model is built through AHP software.</p>
<p>Second, the weight coefficient matrix between different indexes of course quality evaluation is established according to the index weight results.</p>
<p>Third, the evaluators are selected to evaluate the teaching quality according to the evaluation data of the relative importance of the indicators in the indicator evaluation system.</p>
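<p>The index-weighting step above is performed with AHP software; as a rough illustration only, priority weights can be derived from a pairwise comparison matrix with the geometric-mean method. The comparison values below are hypothetical and do not come from the study:</p>

```python
import math

def ahp_weights(matrix):
    """Priority weights from a pairwise comparison matrix (geometric-mean method)."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical 3x3 comparison of course-quality indicators on the 1-9 scale
m = [[1.0,     3.0,     5.0],
     [1 / 3.0, 1.0,     2.0],
     [1 / 5.0, 1 / 2.0, 1.0]]
weights = ahp_weights(m)   # weights sum to 1; the first indicator dominates
```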
</sec>
<sec id="sec14">
<title>Recommendation Algorithm Prediction Accuracy Test</title>
<p><xref rid="tab1" ref-type="table">Table 1</xref> lists the test results of the prediction accuracy of the recommendation algorithm.</p>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>Comparison between RMSE and MAE.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="middle">Index</th>
<th align="center" valign="middle">Neu MF</th>
<th align="center" valign="middle">CFA</th>
<th align="center" valign="middle">Improved Neu MF</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">RMSE</td>
<td align="char" valign="top" char=".">1.372</td>
<td align="char" valign="top" char=".">3.362</td>
<td align="char" valign="top" char=".">1.251</td>
</tr>
<tr>
<td align="left" valign="top">MAE</td>
<td align="char" valign="top" char=".">0.825</td>
<td align="char" valign="top" char=".">2.953</td>
<td align="char" valign="top" char=".">0.625</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In <xref rid="tab1" ref-type="table">Table 1</xref>, the RMSE of Improved Neu MF is 1.251, and the MAE is 0.625. Compared with the other two algorithms, this algorithm has smaller errors and higher accuracy. <xref rid="fig7" ref-type="fig">Figure 7</xref> presents a line chart of the performance comparison to highlight the superiority of the algorithm reported here.</p>
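<p>For reference, the two error metrics are computed as follows; the predicted and actual ratings in the example are hypothetical:</p>

```python
import math

def rmse(pred, true):
    """Root Mean Square Error: penalizes large errors quadratically."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def mae(pred, true):
    """Mean Absolute Error: average magnitude of the errors."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

# Hypothetical predicted vs. actual interaction labels
pred = [0.9, 0.2, 0.7, 0.4]
true = [1.0, 0.0, 1.0, 0.0]
r, m = rmse(pred, true), mae(pred, true)
# RMSE weights large errors more heavily, so it is never below MAE
```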
<fig position="float" id="fig7">
<label>Figure 7</label>
<caption>
<p>Comparison of root mean square error (RMSE) and mean absolute error (MAE).</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g007.tif"/>
</fig>
<p>In <xref rid="fig7" ref-type="fig">Figure 7</xref>, the RMSE and MAE of the Improved Neu MF model are lower than those of Neu MF and CFA, showing an apparent advantage in the prediction precision.</p>
</sec>
<sec id="sec15">
<title>Recommendation Performance Test</title>
<p>Hits Ratio (HR) can precisely measure the recommendation accuracy, and Normalized Discounted Cumulative Gain (NDCG) can clearly reflect the ranking of recommended items. The three algorithms are compared experimentally, and the results are summarized in <xref rid="tab2" ref-type="table">Table 2</xref>.</p>
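<p>A sketch of how HR and NDCG are computed for one test user under the 1-positive/19-negative protocol described earlier; the course identifiers and the ranked list are hypothetical:</p>

```python
import math

def hit_ratio(ranked, positive, k=10):
    """1 if the held-out positive item appears in the top-k list, else 0."""
    return 1.0 if positive in ranked[:k] else 0.0

def ndcg(ranked, positive, k=10):
    """Discounted gain of the single positive item's rank position."""
    if positive in ranked[:k]:
        pos = ranked.index(positive)   # 0-based rank
        return 1.0 / math.log2(pos + 2)
    return 0.0

# One test user: 1 positive course (id 13) plus 19 sampled negatives,
# ordered by the model's predicted score
negatives = list(range(100, 119))
ranked = negatives[:2] + [13] + negatives[2:]   # positive ranked 3rd
hr = hit_ratio(ranked, positive=13, k=10)
score = ndcg(ranked, positive=13, k=10)
```

<p>Averaging these per-user values over all test users gives the HR and NDCG figures reported in the table; a higher rank for the positive item yields a larger discounted gain.</p>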
<table-wrap position="float" id="tab2">
<label>Table 2</label>
<caption>
<p>NDCG and HR experimental results.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th align="center" valign="top">NDCG</th>
<th align="center" valign="top">HR</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Neu MF</td>
<td align="char" valign="top" char=".">0.32</td>
<td align="char" valign="top" char=".">0.37</td>
</tr>
<tr>
<td align="left" valign="top">CFA</td>
<td align="char" valign="top" char=".">0.06</td>
<td align="char" valign="top" char=".">0.11</td>
</tr>
<tr>
<td align="left" valign="top">Improved Neu MF</td>
<td align="char" valign="top" char=".">0.42</td>
<td align="char" valign="top" char=".">0.51</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>According to <xref rid="tab2" ref-type="table">Table 2</xref>, Improved Neu MF&#x2019;s NDCG reaches 0.42, and its HR is 0.51. Adding temporal information improves the course recommendation results compared with the other two algorithms. <xref rid="fig8" ref-type="fig">Figure 8</xref> clearly reflects the performance gap in terms of NDCG and HR.</p>
<fig position="float" id="fig8">
<label>Figure 8</label>
<caption>
<p>Comparison of normalized discounted cumulative gain (NDCG) and hits ratio (HR).</p>
</caption>
<graphic xlink:href="fpsyg-13-910677-g008.tif"/>
</fig>
<p>In <xref rid="fig8" ref-type="fig">Figure 8</xref>, the NDCG and HR of the Improved Neu MF model are higher than those of the comparison algorithms. As <xref rid="fig8" ref-type="fig">Figure 8</xref> and <xref rid="tab2" ref-type="table">Table 2</xref> show, the improved Neu MF algorithm has the highest evaluation index values of NDCG and HR, which are 0.36 and 0.40&#x2009;units higher than CFA with the worst results, respectively, showing apparent advantages. Therefore, the performance of the algorithm is improved due to the addition of temporal information.</p>
</sec>
<sec id="sec16">
<title>Discussion</title>
<p>With the rapid development of the Internet economy, the complex information in society and on the network has had a significant impact on students&#x2019; lives. From the perspective of educational psychology, this research analyzes the shortcomings of existing course recommendation algorithms and identifies unsolved problems such as data sparsity, temporal information fusion, and accuracy. Therefore, deep learning methods, the MLP and CFA algorithms, are adopted to construct an improved CFA recommendation algorithm. Training and test data sets are used to verify the model. The results demonstrate that the RMSE and MAE of the improved Neu MF model are lower than those of Neu MF and CFA, showing obvious advantages in prediction accuracy. The RMSE of this scheme is 1.251 and the MAE is 0.625, both better than those of similar algorithms, indicating that the optimized model performs better and can recommend suitable courses. In addition, the improved Neu MF algorithm has the highest NDCG and HR evaluation index values, which are 0.36 and 0.40&#x2009;units higher, respectively, than those of CFA, the worst performer. <xref ref-type="bibr" rid="ref8">Gulzar and Leema (2018)</xref> studied a curriculum recommendation system based on a teaching classification method and described how scholars choose courses in various fields to meet research objectives in non-formal education. Their results showed that the course recommendation algorithm has practical reference value for data availability and course performance. <xref ref-type="bibr" rid="ref19">Shahbazi and Byun (2022)</xref> studied an agent-based network learning system for course recommendation. They found that users face various challenges on online platforms, one of which is identifying the real information of search results based on these resources. Therefore, this paper can provide theoretical support for the development of deep learning algorithms and has practical guiding significance for improving socio-political teaching strategies from the perspective of educational psychology.</p>
</sec>
</sec>
<sec id="sec17" sec-type="conclusions">
<title>Conclusion</title>
<p>In the context of the rapid development of the Internet, students&#x2019; minds are easily influenced by complex information. Currently, DL-based recommendation algorithms are often used in the education field, and the performance of the recommendation algorithm has a significant impact on the effect of teacher-student interaction. This work used DL to solve the problem of data sparsity. Furthermore, a neural CFA was designed by combining the advantages of DL and conventional CFA: traditional MF is used to find linear relationships between users and items, and a multi-layer neural network is used to detect nonlinear relationships between them. This method can reduce the interference of missing data values and enhance recommendation precision. As the teaching process advances, curriculum resources and teaching methods should change accordingly. Therefore, the K-means clustering algorithm was applied to the neural CFA through temporal data to maximize the optimization of course resources. Finally, appropriate evaluation metrics were selected to conduct simulation experiments on the relevant data sets. The experimental results indicated that the improved neural CFA model combining the advantages of DL and traditional CFA has superior performance. Teachers can analyze the social and political courses and students&#x2019; learning psychology based on educational psychology, promoting teaching interaction between teachers and students. The main deficiency of the research is that the experiment only used data from a single learning platform, which may affect the reliability of the experimental results. Future research will combine user learning data from multiple platforms to check and optimize the model and improve the effect of teaching strategies.</p>
</sec>
<sec id="sec18" sec-type="data-availability">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="sec19">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by Chengdu Normal University Ethics Committee. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.</p>
</sec>
<sec id="sec20">
<title>Author Contributions</title>
<p>ZC: writing-original draft preparation and methodology. LW: conceptualization, software, and validation. XH: data curation and formal analysis. PC: writing&#x2014;review and editing and visualization. HW: writing-original draft preparation. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec22" sec-type="disclaimer">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ack>
<p>This work was supported by 2022 Guangzhou Philosophy and Science Development Construction Project (No. 2022GZGJ119), 2021 Youth Innovative Talents Project of Guangdong Universities (No. 2021WQNCX138), and 2021 College General Research Projects (No. 21-006B).</p>
</ack>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bobadilla</surname> <given-names>J.</given-names></name> <name><surname>Alonso</surname> <given-names>S.</given-names></name> <name><surname>Hernando</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning architecture for collaborative filtering recommender systems</article-title>. <source>Appl. Sci.</source> <volume>10</volume>:<fpage>2441</fpage>. doi: <pub-id pub-id-type="doi">10.3390/app10072441</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>The impact of expatriates&#x2019; cross-cultural adjustment on work stress and job involvement in the high-tech industry</article-title>. <source>Front. Psychol.</source> <volume>10</volume>:<fpage>2228</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2019.02228</pub-id>, PMID: <pub-id pub-id-type="pmid">31649581</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>W.</given-names></name> <name><surname>Cai</surname> <given-names>F.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name> <name><surname>Rijke</surname> <given-names>M. D.</given-names></name></person-group> (<year>2019</year>). <article-title>Joint neural collaborative filtering for recommender systems</article-title>. <source>ACM Trans. Manag. Inf. Syst.</source> <volume>37</volume>, <fpage>1</fpage>&#x2013;<lpage>30</lpage>. doi: <pub-id pub-id-type="doi">10.1145/3343117</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Citakoglu</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>Comparison of multiple learning artificial intelligence models for estimation of long-term monthly temperatures in Turkey</article-title>. <source>Arab. J. Geosci.</source> <volume>14</volume>, <fpage>1</fpage>&#x2013;<lpage>16</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s12517-021-08484-3</pub-id></citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fan</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Zu</surname> <given-names>D.</given-names></name> <name><surname>Zhang</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>An automatic optimal course recommendation method for online math education platforms based on Bayesian model</article-title>. <source>Int. J. Emerg. Technol. Learn.</source> <volume>16</volume>, <fpage>95</fpage>&#x2013;<lpage>107</lpage>. doi: <pub-id pub-id-type="doi">10.3991/ijet.v16i13.24039</pub-id></citation></ref>
<ref id="ref6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fang</surname> <given-names>C.</given-names></name> <name><surname>Lu</surname> <given-names>Q.</given-names></name></person-group> (<year>2021</year>). <article-title>Personalized recommendation model of high-quality education resources for college students based on data mining</article-title>. <source>Complexity</source> <volume>2021</volume>, <fpage>1</fpage>&#x2013;<lpage>11</lpage>. doi: <pub-id pub-id-type="doi">10.1155/2021/9935973</pub-id></citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>M.</given-names></name> <name><surname>Qu</surname> <given-names>H.</given-names></name> <name><surname>Yi</surname> <given-names>Z.</given-names></name> <name><surname>Lu</surname> <given-names>L.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>A novel deep learning-based collaborative filtering model for recommendation system</article-title>. <source>IEEE Trans. Cybern.</source> <volume>49</volume>, <fpage>1084</fpage>&#x2013;<lpage>1096</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TCYB.2018.2795041</pub-id>, PMID: <pub-id pub-id-type="pmid">29994436</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gulzar</surname> <given-names>Z.</given-names></name> <name><surname>Leema</surname> <given-names>A. A.</given-names></name></person-group> (<year>2018</year>). <article-title>Course recommendation based on query classification approach</article-title>. <source>Int. J. Web-Based Learn. Teach. Technol.</source> <volume>13</volume>, <fpage>69</fpage>&#x2013;<lpage>83</lpage>. doi: <pub-id pub-id-type="doi">10.4018/IJWLTT.2018070105</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Knowles</surname> <given-names>R. T.</given-names></name></person-group> (<year>2018</year>). <article-title>Teaching who you are: connecting teachers&#x2019; civic education ideology to instructional strategies</article-title>. <source>Theor. Res. Soc. Educ.</source> <volume>46</volume>, <fpage>68</fpage>&#x2013;<lpage>109</lpage>. doi: <pub-id pub-id-type="doi">10.1080/00933104.2017.1356776</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Zhong</surname> <given-names>Z.</given-names></name> <name><surname>Cheng</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Intelligent learning system based on personalized recommendation technology</article-title>. <source>Neural Comput. &#x0026; Applic.</source> <volume>31</volume>, <fpage>4455</fpage>&#x2013;<lpage>4462</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00521-018-3510-5</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Ye</surname> <given-names>Z.</given-names></name></person-group> (<year>2020</year>). <article-title>Course recommendations in online education based on collaborative filtering recommendation algorithm</article-title>. <source>Complexity</source> <volume>2020</volume>, <fpage>1</fpage>&#x2013;<lpage>10</lpage>. doi: <pub-id pub-id-type="doi">10.1155/2020/6619249</pub-id></citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nassar</surname> <given-names>N.</given-names></name> <name><surname>Jafar</surname> <given-names>A.</given-names></name> <name><surname>Rahhal</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>A novel deep multi-criteria collaborative filtering model for recommendation system</article-title>. <source>Knowl.-Based Syst.</source> <volume>187</volume>:<fpage>104811</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.knosys.2019.06.019</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perrotta</surname> <given-names>C.</given-names></name> <name><surname>Selwyn</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning goes to school: toward a relational understanding of AI in education</article-title>. <source>Learn. Media Technol.</source> <volume>45</volume>, <fpage>251</fpage>&#x2013;<lpage>269</lpage>. doi: <pub-id pub-id-type="doi">10.1080/17439884.2020.1686017</pub-id></citation></ref>
<ref id="ref15"><citation citation-type="other"><person-group person-group-type="author"><name><surname>Pu</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>C.</given-names></name> <name><surname>Wu</surname> <given-names>W.</given-names></name></person-group> (<year>2020</year>). &#x201C;A deep reinforcement learning framework for instructional sequencing.&#x201D; <italic>2020 IEEE International Conference on Big Data (Big Data)</italic>. IEEE, 5201&#x2013;5208.</citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qian</surname> <given-names>J.</given-names></name> <name><surname>Song</surname> <given-names>B.</given-names></name> <name><surname>Jin</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>B.</given-names></name> <name><surname>Chen</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Linking empowering leadership to task performance, taking charge, and voice: the mediating role of feedback-seeking</article-title>. <source>Front. Psychol.</source> <volume>9</volume>:<fpage>2025</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2018.02025</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="other"><person-group person-group-type="author"><name><surname>Ren</surname> <given-names>Y.</given-names></name> <name><surname>Huang</surname> <given-names>S.</given-names></name> <name><surname>Zhou</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). &#x201C;Deep learning and integrated learning for predicting student&#x2019;s withdrawal behavior in MOOC.&#x201D; <italic>2021 2nd International Conference on Education, Knowledge and Information Management (ICEKIM)</italic>. IEEE, 81&#x2013;84.</citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rogoza</surname> <given-names>R.</given-names></name> <name><surname>&#x017B;emojtel-Piotrowska</surname> <given-names>M.</given-names></name> <name><surname>Kwiatkowska</surname> <given-names>M. M.</given-names></name> <name><surname>Kwiatkowska</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>The bright, the dark, and the blue face of narcissism: the spectrum of narcissism in its relations to the metatraits of personality, self-esteem, and the nomological network of shyness, loneliness, and empathy</article-title>. <source>Front. Psychol.</source> <volume>9</volume>:<fpage>343</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2018.00343</pub-id>, PMID: <pub-id pub-id-type="pmid">29593627</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shahbazi</surname> <given-names>Z.</given-names></name> <name><surname>Byun</surname> <given-names>Y. C.</given-names></name></person-group> (<year>2022</year>). <article-title>Agent-based recommendation in E-learning environment using knowledge discovery and machine learning approaches</article-title>. <source>Mathematics</source> <volume>10</volume>:<fpage>1192</fpage>. doi: <pub-id pub-id-type="doi">10.3390/math10071192</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suganya</surname> <given-names>G.</given-names></name> <name><surname>Premalatha</surname> <given-names>M.</given-names></name> <name><surname>Dubey</surname> <given-names>P.</given-names></name> <name><surname>Drolia</surname> <given-names>A. R.</given-names></name> <name><surname>Srihari</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Subjective areas of improvement: a personalized recommendation</article-title>. <source>Procedia Comput. Sci.</source> <volume>172</volume>, <fpage>235</fpage>&#x2013;<lpage>239</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.procs.2020.05.037</pub-id></citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Troussas</surname> <given-names>C.</given-names></name> <name><surname>Giannakas</surname> <given-names>F.</given-names></name> <name><surname>Sgouropoulou</surname> <given-names>C.</given-names></name> <name><surname>Voyiatzis</surname> <given-names>I.</given-names></name></person-group> (<year>2020a</year>). <article-title>Collaborative activities recommendation based on students&#x2019; collaborative learning styles using ANN and WSM</article-title>. <source>Interact. Learn. Environ.</source> <volume>1</volume>, <fpage>1</fpage>&#x2013;<lpage>14</lpage>. doi: <pub-id pub-id-type="doi">10.1080/10494820.2020.1761835</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Troussas</surname> <given-names>C.</given-names></name> <name><surname>Krouska</surname> <given-names>A.</given-names></name> <name><surname>Sgouropoulou</surname> <given-names>C.</given-names></name></person-group> (<year>2020b</year>). <article-title>A novel teaching strategy through adaptive learning activities for computer programming</article-title>. <source>IEEE Trans. Educ.</source> <volume>64</volume>, <fpage>103</fpage>&#x2013;<lpage>109</lpage>. doi: <pub-id pub-id-type="doi">10.1109/TE.2020.3012744</pub-id></citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Exploration of data mining algorithms of an online learning behavior log based on cloud computing</article-title>. <source>Int. J. Contin. Eng. Educ. Life Long Learn.</source> <volume>31</volume>, <fpage>371</fpage>&#x2013;<lpage>380</lpage>. doi: <pub-id pub-id-type="doi">10.1504/IJCEELL.2021.116033</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>Y. J.</given-names></name> <name><surname>Liu</surname> <given-names>W. J.</given-names></name> <name><surname>Yuan</surname> <given-names>C. H.</given-names></name></person-group> (<year>2020</year>). <article-title>A mobile-based barrier-free service transportation platform for people with disabilities</article-title>. <source>Comput. Hum. Behav.</source> <volume>107</volume>:<fpage>105776</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.chb.2018.11.005</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>Y.</given-names></name> <name><surname>Song</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Gratifications for social media use in entrepreneurship courses: learners&#x2019; perspective</article-title>. <source>Front. Psychol.</source> <volume>10</volume>:<fpage>1270</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2019.01270</pub-id>, PMID: <pub-id pub-id-type="pmid">31214081</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>W.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Wu</surname> <given-names>Y. J.</given-names></name></person-group> (<year>2021</year>). <article-title>Internal and external networks, and incubatees&#x2019; performance in dynamic environments: entrepreneurial learning&#x2019;s mediating effect</article-title>. <source>J. Technol. Transf.</source> <volume>46</volume>, <fpage>1707</fpage>&#x2013;<lpage>1733</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10961-020-09790-w</pub-id></citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>W.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Zheng</surname> <given-names>C.</given-names></name> <name><surname>Wu</surname> <given-names>Y. J.</given-names></name></person-group> (<year>2019</year>). <article-title>Effect of narcissism, psychopathy, and machiavellianism on entrepreneurial intention&#x2014;the mediating of entrepreneurial self-efficacy</article-title>. <source>Front. Psychol.</source> <volume>10</volume>:<fpage>360</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2019.00360</pub-id>, PMID: <pub-id pub-id-type="pmid">30846958</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>Y. C. J.</given-names></name> <name><surname>Wu</surname> <given-names>T.</given-names></name></person-group> (<year>2017</year>). <article-title>A decade of entrepreneurship education in the Asia Pacific for future directions in theory and practice</article-title>. <source>Manag. Decis.</source> <volume>55</volume>, <fpage>1333</fpage>&#x2013;<lpage>1350</lpage>. doi: <pub-id pub-id-type="doi">10.1108/MD-05-2017-0518</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xiao</surname> <given-names>T.</given-names></name> <name><surname>Shen</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>Neural variational matrix factorization for collaborative filtering in recommendation systems</article-title>. <source>Appl. Intell.</source> <volume>49</volume>, <fpage>3558</fpage>&#x2013;<lpage>3569</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s10489-019-01469-6</pub-id></citation></ref>
<ref id="ref30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xiao</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>M.</given-names></name> <name><surname>Jiang</surname> <given-names>B.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). <article-title>A personalized recommendation system with a combinational algorithm for online learning</article-title>. <source>J. Ambient. Intell. Humaniz. Comput.</source> <volume>9</volume>, <fpage>667</fpage>&#x2013;<lpage>677</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s12652-017-0466-8</pub-id></citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>You</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>The seeming &#x2018;round trip&#x2019; of learner-centred education: a &#x2018;best practice&#x2019; derived from China&#x2019;s new curriculum reform?</article-title> <source>Comp. Educ.</source> <volume>55</volume>, <fpage>97</fpage>&#x2013;<lpage>115</lpage>. doi: <pub-id pub-id-type="doi">10.1080/03050068.2018.1541662</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yuan</surname> <given-names>C. H.</given-names></name> <name><surname>Wu</surname> <given-names>Y. J.</given-names></name></person-group> (<year>2020</year>). <article-title>Mobile instant messaging or face-to-face? Group interactions in cooperative simulations</article-title>. <source>Comput. Hum. Behav.</source> <volume>113</volume>:<fpage>106508</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.chb.2020.106508</pub-id></citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>W.</given-names></name> <name><surname>Wu</surname> <given-names>Y. C. J.</given-names></name> <name><surname>Chen</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>Business intelligence for patient-centeredness: a systematic review</article-title>. <source>Telematics Inform.</source> <volume>35</volume>, <fpage>665</fpage>&#x2013;<lpage>676</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tele.2017.06.015</pub-id></citation></ref>
<ref id="ref001"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>D.</given-names></name> <name><surname>Liang</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>G.</given-names></name> <name><surname>Yu</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>A dynamic personalized news recommendation system based on BAP user profiling method</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>41068</fpage>&#x2013;<lpage>41078</lpage>. doi: <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2858564</pub-id></citation></ref>
</ref-list>
</back>
</article>