<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Educ.</journal-id>
<journal-title>Frontiers in Education</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Educ.</abbrev-journal-title>
<issn pub-type="epub">2504-284X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/feduc.2023.1140272</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Education</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Eye tracking as feedback tool in physics teacher education</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Schweinberger</surname> <given-names>Matthias</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1537564/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Watzka</surname> <given-names>Bianca</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2166883/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Girwidz</surname> <given-names>Raimund</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Faculty for Physics, Ludwig-Maximilians-Universit&#x00E4;t M&#x00FC;nchen</institution>, <addr-line>M&#x00FC;nchen</addr-line>, <country>Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Institute of Physics (IFP), Didactics of Physics, Otto-von-Guericke-Universit&#x00E4;t</institution>, <addr-line>Magdeburg</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Pascal Klein, University of G&#x00F6;ttingen, Germany</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Sebastian Becker, University of Cologne, Germany; Nicole Graulich, University of Giessen, Germany</p></fn>
<corresp id="c001">&#x002A;Correspondence: Matthias Schweinberger, <email>m.schweinberger1@lmu.de</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>30</day>
<month>05</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>8</volume>
<elocation-id>1140272</elocation-id>
<history>
<date date-type="received">
<day>08</day>
<month>01</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>05</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2023 Schweinberger, Watzka and Girwidz.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Schweinberger, Watzka and Girwidz</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>The ability to direct pupils&#x2019; attention to relevant information during the experimental process is essential for all science teachers. The aim of this article is to investigate the effects of training prospective physics teachers&#x2019; ability to direct attention during the presentation of experiments, using eye tracking visualizations of pupils&#x2019; visual attention as a feedback tool. Many eye tracking studies in the field of learning use eye movement recordings to investigate the effectiveness of an instructional design by varying cues or the presentation format. Another important line of research studies the teacher&#x2019;s gaze in real classroom settings (mobile eye tracking). Here we use eye tracking in a new and innovative way: Eye tracking is used as a feedback tool for prospective teachers, showing them the effects of their verbal moderations when trying to direct their pupils&#x2019; attention. The study is based on a mixed methods approach and is designed as a single factor quasi-experiment with pre-post measurement. Pre- and post-test are identical. Prospective teachers record their verbal moderations on a &#x201C;silent&#x201D; experimental video. The quality of the moderation is rated by several independent physics educators. In addition, pupils&#x2019; eye movements while watching the videos are recorded using eye tracking. The resulting eye movements are used by the lecturer to give individual feedback to the prospective teachers, focusing on the ability to control attention in class. The effect of this eye tracking feedback on the prospective teachers is recorded in interviews. Between the pre-test and the post-test, the results show a significant improvement in the quality of the moderations of the videos. The results of the interviews show that the reason for this improvement is the perception of one&#x2019;s own impact on the pupils&#x2019; attention through eye tracking feedback.
The overall training program of moderating &#x201C;silent videos&#x201D; including eye tracking as a feedback tool allows for targeted training of the verbal guidance of the pupils&#x2019; attention during the presentation of experiments.</p>
</abstract>
<kwd-group>
<kwd>silent videos</kwd>
<kwd>eye tracking</kwd>
<kwd>feedback</kwd>
<kwd>directing attention</kwd>
<kwd>self-perception</kwd>
<kwd>verbal cues</kwd>
</kwd-group>
<contract-sponsor id="cn001">Bundesministerium f&#x00FC;r Bildung und Forschung<named-content content-type="fundref-id">10.13039/501100002347</named-content></contract-sponsor>
<counts>
<fig-count count="9"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="50"/>
<page-count count="15"/>
<word-count count="10913"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>STEM Education</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>1. Introduction</title>
<sec id="S1.SS1">
<title>1.1. Directing attention</title>
<sec id="S1.SS1.SSS1">
<title>1.1.1. Learning and the role of paying attention</title>
<p>The Cognitive Theory of Multimedia Learning (CTML) is based on three assumptions that are known as the cognitive principles of learning. The first principle is that multimedia learning takes place via a visual and an auditory processing channel. The second is that both channels have a limited capacity. Accordingly, learners can only process a certain amount of information per channel at the same time. The last is the principle of active learning. It states that learning takes place through cognitive processes (<xref ref-type="bibr" rid="B27">Mayer, 2014</xref>). <xref ref-type="bibr" rid="B27">Mayer (2014)</xref> identifies five cognitive processes: (1) selecting relevant words, (2) selecting relevant images, (3) organizing the selected words into a coherent verbal representation, (4) organizing the selected images into a coherent pictorial representation, and (5) integrating the pictorial and verbal representations and prior knowledge. The selection of words or images implies that learners pay attention to the information presented (<xref ref-type="bibr" rid="B27">Mayer, 2014</xref>). With regard to attention, a distinction is made between visual attention, auditory attention, and other forms that are not relevant here (see <xref ref-type="bibr" rid="B4">Amso, 2016</xref>). With the help of &#x201C;silent videos,&#x201D; we want to investigate the ability of prospective teachers to direct pupils&#x2019; visual attention through speech. Therefore, only visual and auditory attention are relevant.</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>Visual Attention. <xref ref-type="bibr" rid="B25">Lockhofen and Mulert (2021)</xref> further specify the role of attention in the learning process. They define: &#x201C;<italic>Visual attention is the cognitive process that mediates the selection of important information from the environment.&#x201D;</italic> (<xref ref-type="bibr" rid="B25">Lockhofen and Mulert, 2021</xref>, p. 1).</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Auditory Attention. <italic>&#x201C;It is well-known that stimulus-focused attention improves auditory performance by enabling one to process relevant stimuli more efficiently&#x201D;</italic> (<xref ref-type="bibr" rid="B10">Folyi et al., 2012</xref>, p. 1).</p>
</list-item>
</list>
<p>Another distinction is the trigger that activates attention. <xref ref-type="bibr" rid="B19">Katsuki and Constantinidis (2014)</xref> distinguish between bottom-up attention and top-down attention. Bottom-up attention is an externally induced process. The information to be processed is selected automatically. Top-down attention is an internally generated process. The information is actively sought out based on self-selected factors (<xref ref-type="bibr" rid="B19">Katsuki and Constantinidis, 2014</xref>). The triggers thus differ between bottom-up and top-down attention. However, their effects are similar: in both attentional processes, objects are processed preferentially. In both cases, a stronger neural response follows, which can induce better storage in memory (<xref ref-type="bibr" rid="B35">Pinto et al., 2013</xref>).</p>
<p>Therefore, both forms of attention may be of interest for teaching. The bottom-up process is a <italic>stimulus-driven process</italic> (<xref ref-type="bibr" rid="B35">Pinto et al., 2013</xref>). So, it could be specifically triggered by signals or cues to direct visual or auditory attention to relevant information. The top-down process is influenced by prior knowledge (<xref ref-type="bibr" rid="B24">Lingzhu, 2003</xref>; <xref ref-type="bibr" rid="B26">Lupyan, 2017</xref>) or previous experience (<xref ref-type="bibr" rid="B1">Addleman and Jiang, 2019</xref>).</p>
</sec>
<sec id="S1.SS1.SSS2">
<title>1.1.2. Controlling attention through cueing</title>
<p>Cues are often defined as content-free information that is intended to direct attention and thus support cognitive processes (<xref ref-type="bibr" rid="B16">Hu and Zhang, 2021</xref>). Spotlights (<xref ref-type="bibr" rid="B9">de Koning et al., 2010</xref>; <xref ref-type="bibr" rid="B18">Jarodzka et al., 2013</xref>), color changes (e.g., <xref ref-type="bibr" rid="B34">Ozcelik et al., 2010</xref>) and arrows (<xref ref-type="bibr" rid="B22">Kriz and Hegarty, 2007</xref>; <xref ref-type="bibr" rid="B5">Boucheix and Lowe, 2010</xref>) are good ways of directing visual attention. However, cues often differ greatly, not only in when they appear or what they look like. The classical categorization by modality (e.g., auditory or visual) does not do justice to this variety. One category that has received little attention so far is the content richness of the cues (<xref ref-type="bibr" rid="B47">Watzka et al., 2021</xref>), even though content should, by definition, be absent from a cue. However, in many classic examples, such as the label, it is present. A label therefore has a different quality than a spotlight, which is only intended to direct attention. In this study, only verbal cues are used, which can be offered with or without content. For example, one cue may simply direct attention (&#x201C;Look to the left!&#x201D;), while another may convey specific content (&#x201C;the wooden block is an opaque object&#x201D;).</p>
<p>In meta-analyses covering different subject areas, <xref ref-type="bibr" rid="B36">Richter et al. (2016)</xref>, <xref ref-type="bibr" rid="B41">Schneider et al. (2018)</xref>, and <xref ref-type="bibr" rid="B3">Alpizar et al. (2020)</xref> confirm the positive effect of the cueing principle on learning, especially for novices. The analysis of <xref ref-type="bibr" rid="B36">Richter et al. (2016)</xref> includes 27 studies. Their main finding is that cues have a positive effect on learning performance with small to medium effect sizes and that especially learners with low prior knowledge benefit from cues. The analysis by <xref ref-type="bibr" rid="B41">Schneider et al. (2018)</xref> includes 103 studies and also considers eye tracking data. In summary, they likewise confirm the beneficial effect of cueing on learning success. In addition, attentional cues seem to induce, with small to medium effect sizes, longer learning times in general and longer gaze durations on relevant information in particular (<xref ref-type="bibr" rid="B41">Schneider et al., 2018</xref>). The mean gaze duration can be attributed to the cognitive process of organizing in the CTML (<xref ref-type="bibr" rid="B2">Alemdag and Cagiltay, 2018</xref>) and can indicate the degree of mental effort (<xref ref-type="bibr" rid="B17">Jarodzka et al., 2015</xref>). <xref ref-type="bibr" rid="B34">Ozcelik et al. (2010)</xref> interpret long mean gaze durations as a sign of more demanding tasks and correspondingly higher mental effort. Cues lead to longer viewing of the information they address than learning materials without cues (<xref ref-type="bibr" rid="B5">Boucheix and Lowe, 2010</xref>; <xref ref-type="bibr" rid="B34">Ozcelik et al., 2010</xref>; <xref ref-type="bibr" rid="B12">Glaser and Schwan, 2015</xref>; <xref ref-type="bibr" rid="B50">Xie et al., 2019</xref>).</p>
<p>In a predominantly image-based learning material such as videos, verbal cues in particular have a positive effect on visual attention and learning success (e.g., <xref ref-type="bibr" rid="B12">Glaser and Schwan, 2015</xref>). An explanation for the better suitability of spoken text compared to written text can be found in CTML (<xref ref-type="bibr" rid="B27">Mayer, 2014</xref>). Due to the limited capacity of the processing channels, it makes sense to use additional resources of the auditory processing channel and thus follow the modality principle (see section &#x201C;1.1.3. Modality principle and learning with experimentation videos&#x201D;).</p>
</sec>
<sec id="S1.SS1.SSS3">
<title>1.1.3. Modality principle and learning with experimentation videos</title>
<p>The modality principle generally means that it is beneficial for learning if the text which accompanies graphics is spoken instead of written. Among other things, the modality principle has a positive effect on visual attention, because there is no split-attention effect to worry about (<xref ref-type="bibr" rid="B40">Schmidt-Weigand et al., 2010</xref>). In a predominantly image-based learning material (e.g., videos), spoken texts (e.g., moderations) in particular have a positive effect on visual attention and learning (<xref ref-type="bibr" rid="B12">Glaser and Schwan, 2015</xref>).</p>
<p>In a meta-analysis comprising 43 studies which cover a vast spectrum of subjects and visualizations, <xref ref-type="bibr" rid="B11">Ginns (2005)</xref> confirms the modality effect with a medium effect size and shows that learning materials with visualizations and spoken texts generally lead to better learning outcomes than learning materials with visualizations and written texts.</p>
<p>The beneficial learning impact of the modality effect is explained by a more effective use of working memory capacity (see section &#x201C;1.1.1. Learning and the role of paying attention&#x201D;). Accordingly, more cognitive resources can be used for processing the learning content, and learning performance increases (<xref ref-type="bibr" rid="B45">Sweller et al., 2011</xref>). When demonstrating experiments in class, teachers automatically use their voice as their main tool of communication. They automatically give verbal cues, some of which are content-related (e.g., mentioning a function) and some of which control attention (e.g., mentioning a surface feature). The question is how prospective teachers learn to control the attention of their pupils. This paper is about helping prospective physics teachers guide their pupils in selecting relevant information by controlling bottom-up visual attention during experimentation through verbal cues. In this study, visual bottom-up attention is controlled via the cueing principle (verbal cues), since this technique can be applied with little effort in classroom practice when teachers present experiments. Support for the prospective teachers is provided by a special feedback format, which is classified theoretically in the following sections.</p>
</sec>
</sec>
<sec id="S1.SS2">
<title>1.2. Feedback</title>
<sec id="S1.SS2.SSS1">
<title>1.2.1. Definition and phases</title>
<p><italic>&#x201C;Feedback is information provided by an agent regarding aspects of one&#x2019;s performance or understanding&#x201D;</italic> (<xref ref-type="bibr" rid="B15">Hattie and Timperley, 2007</xref>, p. 81). Focusing especially on learners, Shute defines feedback as <italic>&#x201C;information communicated to the learner that is intended to modify his or her thinking or behavior for the purpose of improving learning&#x201D;</italic> (<xref ref-type="bibr" rid="B43">Shute, 2008</xref>, p. 153). Feedback thus shows the gap between the target state and the current state and should enable the recipient to recognize and close this gap. In this study, the presentation of pupils&#x2019; gaze behavior is intended to provide feedback and to help prospective teachers become aware of their ability to control attention. The three classic feedback phases described in the literature (<xref ref-type="bibr" rid="B48">Wisniewski et al., 2020</xref>) all occur here:</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>&#x201C;Feed-up&#x201D; (comparison of the actual state with a target state): Students and teachers receive information about the learning goals to be accomplished. By watching the gaze overlays of their first moderation (pre-test), the prospective teachers received information about how the pupils reacted to their moderation of the video.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>&#x201C;Feed-back&#x201D; (comparison of the actual state with a previous state): Students and teachers see what they have achieved in relation to an expected standard or previous performance. By watching the gaze overlay of their second try, the prospective teachers could see what they had achieved relative to their first performance.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>&#x201C;Feed-forward&#x201D; (explanation of the target state based on the actual state): Students and teachers receive information that leads to an adaptation of learning in the form of enhanced challenges. After analyzing both moderations, the prospective teachers became aware of the positive skills they should develop and the mistakes they should avoid in the future.</p>
</list-item>
</list>
<p>In general, feedback is considered a very powerful tool. <xref ref-type="bibr" rid="B48">Wisniewski et al. (2020)</xref> obtain an average effect size of 0.48 in a meta-analysis. However, feedback does not <italic>per se</italic> lead to better learning outcomes. <xref ref-type="bibr" rid="B21">Kluger and DeNisi (1996)</xref> note that about one third of feedback interventions result in negative learning effects. Learning depends on a variety of different influences (<xref ref-type="bibr" rid="B14">Hattie, 2021</xref>), so there is no standardized way to use feedback. What helps one student today may not help another. Tomorrow, the same feedback may have the opposite effect or no effect at all (<xref ref-type="bibr" rid="B14">Hattie, 2021</xref>). How feedback is received depends not only on the form in which it is given but also on a variety of factors on the recipient&#x2019;s side (<xref ref-type="bibr" rid="B43">Shute, 2008</xref>). For example, important factors are the recipient&#x2019;s self-assessment and experience of self-efficacy (<xref ref-type="bibr" rid="B43">Shute, 2008</xref>).</p>
</sec>
<sec id="S1.SS2.SSS2">
<title>1.2.2. Levels and forms of feedback</title>
<p>To understand the effectiveness of feedback, one must first be aware of the different levels that feedback addresses (<xref ref-type="bibr" rid="B15">Hattie and Timperley, 2007</xref>). First, feedback works at the task level (FT): is the answer to the task right or wrong? Second, feedback addresses the process level (FP), i.e., information about the process of how to deal with the task and/or how to understand it. Third, feedback works on the self-regulation level (FR), where learners check, control, and self-regulate their processes and behavior. Finally, feedback can also address the so-called self level (FS), where positive (and negative) expressions and evaluations about the learner are made (<xref ref-type="bibr" rid="B15">Hattie and Timperley, 2007</xref>). Eye tracking feedback on one&#x2019;s own moderation should ideally address the task and process levels.</p>
<p>The level of feedback addressed depends largely on the form in which it is given. Different authors distinguish forms by medium (written, computer-aided, oral, pictorial, etc.) or by content, for example, formative tutorial (<xref ref-type="bibr" rid="B33">Narciss and Huth, 2006</xref>) or actionable (<xref ref-type="bibr" rid="B6">Cannon and Witherspoon, 2005</xref>) feedback. A detailed description of the different forms can be found in <xref ref-type="bibr" rid="B15">Hattie and Timperley (2007)</xref> and <xref ref-type="bibr" rid="B48">Wisniewski et al. (2020)</xref>. By watching the gaze overlays of individually moderated videos, we concentrate on a certain form of visual and auditory feedback (see section &#x201C;1.2.4. Eye tracking as feedback tool&#x201D; and section &#x201C;3.2.1. Pre-test, first eye tracking feedback and pre-interview&#x201D;).</p>
</sec>
<sec id="S1.SS2.SSS3">
<title>1.2.3. Feedback directions/student feedback</title>
<p>Much of the research describes the forms and effects of feedback on the learner. Recently, feedback from the learner to the teacher has received more attention (<xref ref-type="bibr" rid="B38">Rollett et al., 2021</xref>). The focus here is on the extent to which pupil feedback affects the quality of the teacher&#x2019;s teaching and thus improves the pupils&#x2019; learning success. It has been questioned to what extent pupil feedback is reliable and valid, but recent studies show that pupil feedback provides teachers with valid information about their teaching quality (<xref ref-type="bibr" rid="B38">Rollett et al., 2021</xref>). In this study, training with pupils&#x2019; gaze overlays should provide valid information for feedback, especially since the pupils provide this feedback without being aware of doing so.</p>
<p><xref ref-type="bibr" rid="B37">R&#x00F6;hl et al. (2021)</xref> describe, in the &#x201C;Process Model of Student Feedback on Teaching (SFT),&#x201D; a cyclical model of how pupil feedback affects the teacher. The process begins with collecting and measuring pupil perceptions, which are then reported back to the teacher. The teacher interprets this feedback information, which stimulates cognitive as well as affective reactions and processes. This information can increase teachers&#x2019; knowledge about their teaching and thus trigger a development to improve it, so that pupils&#x2019; learning success can in turn increase. By giving the prospective teachers feedback information about their moderations, we assume that such a development toward better directing pupils&#x2019; attention will be triggered.</p>
</sec>
<sec id="S1.SS2.SSS4">
<title>1.2.4. Eye tracking as feedback tool</title>
<p>The use of eye tracking as a feedback tool in education has recently been increasingly emphasized in various disciplines (e.g., <xref ref-type="bibr" rid="B8">Cullipher et al., 2018</xref>). Eye movement recordings have been used to analyze and optimize the effectiveness of the design of learning materials (<xref ref-type="bibr" rid="B23">Langner et al., 2022</xref>). <xref ref-type="bibr" rid="B32">Mussgnug et al. (2014)</xref> describe how eye tracking recordings as a teaching tool improve awareness of user experiences with designed objects and how these experiences can be implemented in design education. <xref ref-type="bibr" rid="B49">Xenos and Rigou (2019)</xref> outline the use of eye tracking data collected and analyzed to help students improve their design. In contrast to gaze data of other people looking at specific objects, the gaze of teachers in real classrooms has also been the subject of various studies (<xref ref-type="bibr" rid="B30">McIntyre et al., 2017</xref>; <xref ref-type="bibr" rid="B44">Stuermer et al., 2017</xref>; <xref ref-type="bibr" rid="B29">McIntyre and Foulsham, 2018</xref>; <xref ref-type="bibr" rid="B31">Minarikova et al., 2021</xref>). In addition to using the gaze data of others, one&#x2019;s own gaze can also be used as feedback (<xref ref-type="bibr" rid="B13">Hansen et al., 2019</xref>). <xref ref-type="bibr" rid="B46">Szulewski et al. (2019)</xref> investigated the effect of eye tracking feedback on emergency physicians during a simulated response exercise, presumably triggering self-reflection processes. <xref ref-type="bibr" rid="B20">Keller et al. (2022)</xref> examined the effect of eye tracking feedback on prospective teachers observing and commenting on their own gaze during a lesson they were teaching.</p>
<p>We use eye tracking in a different way, somewhere between the approaches described above: Eye tracking is used as a feedback tool for prospective teachers, showing them the effects of their verbal moderations as they try to direct their pupils&#x2019; attention, as happens in the regular classroom.</p>
</sec>
</sec>
</sec>
<sec id="S2">
<title>2. Research question</title>
<p>Directing pupils&#x2019; attention during the presentation of an experiment is crucial to its success. Pupils need to look at the right place at the right time to make the important observations. External cues such as speech can influence visual attention (<xref ref-type="bibr" rid="B12">Glaser and Schwan, 2015</xref>; <xref ref-type="bibr" rid="B50">Xie et al., 2019</xref>; <xref ref-type="bibr" rid="B47">Watzka et al., 2021</xref>). The overall question is how to improve the competence of prospective teachers in moderating experiments in the classroom. Therefore, we used the method of moderating &#x201C;silent videos&#x201D; to train prospective teachers&#x2019; ability to control their pupils&#x2019; attention. The particular focus of this method is on verbal cues through spoken language during the presentation of a video. Based on the five cognitive processes (<xref ref-type="bibr" rid="B27">Mayer, 2014</xref>), one of the main objectives of an appropriate presentation, among many other aspects, is to allow pupils to make the necessary observations (see section &#x201C;3.3.1. Assessment of prospective teachers&#x2019; competence in moderating experimental videos&#x201D;). To assess this process, the times when observation tasks are set and when pupils are explicitly given the opportunity to observe are summarized as &#x201C;pupil-activating time&#x201D;.</p>
<p>Eye tracking is often used to study how a stimulus affects a person&#x2019;s perception. Conversely, visualizations of eye tracking data can be used to draw conclusions about the observer&#x2019;s attention and the effectiveness of cues. By using eye tracking as a feedback tool, we tried to show prospective teachers the impact of their moderation of an experimental video on pupils, so that they in turn could draw consequences for further presentations. This leads to the following research questions.</p>
<list list-type="simple">
<list-item><p><italic>RQ: To what extent can training with eye-tracking visualizations of pupils&#x2019; visual attention improve prospective teachers&#x2019; guidance of pupils&#x2019; gaze?</italic> The following more detailed questions should be considered.</p>
</list-item>
<list-item><p><italic>RQ1: Does training with eye tracking feedback help prospective teachers explain the set-up of an experiment in a way that is adapted to pupils&#x2019; prior knowledge and cognitive and linguistic development?</italic></p>
</list-item>
<list-item><p><italic>RQ2: Does training with eye tracking feedback help prospective teachers to increase the pupil-activating time?</italic></p>
</list-item>
</list>
</sec>
<sec id="S3" sec-type="materials|methods">
<title>3. Materials and methods</title>
<sec id="S3.SS1">
<title>3.1. Participants</title>
<p>A subsample of 15 prospective physics teachers from a German (Bavarian) university was selected. They were on average 22.4 years old (SD = 3.4) and in their 5th semester. Two of the participants were female and 13 were male. All participants had attended the experimental physics lectures and an introductory lecture on physics education with a theoretical introduction to criteria for setting up and conducting experiments before the study. Thus, all students had the necessary content and pedagogical knowledge on the topic of the study.</p>
</sec>
<sec id="S3.SS2">
<title>3.2. Procedure and material</title>
<p>The study uses a pre/post-test design. Between the pre-test and the post-test, a training phase of several weeks took place for the moderation of demonstration experiments. &#x201C;Silent videos&#x201D; were used both in the pre- and post-test as a survey instrument and in the training as learning material. The overall process of the study is shown in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Procedure of the entire study.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g001.tif"/>
</fig>
<sec id="S3.SS2.SSS1">
<title>3.2.1. Pre-test, first eye tracking feedback and pre-interview</title>
<p>As a pre-test the prospective teachers had to moderate a &#x201C;silent video&#x201D; about core and partial shadows (umbra and penumbra; see <xref ref-type="fig" rid="F2">Figure 2</xref>). The video is divided into two main parts: a static part showing the set-up for about 30 s and a dynamic part showing the execution of the experiment for about 60 s. The video shows a small opaque block that is illuminated by two light sources from different angles. A white elongated rail serves as a screen on which the different kinds of shadow can be seen. Everything is recorded from the pupils&#x2019; perspective and presented in real time. All activities are shown as they would normally be done in a live classroom demonstration. For further information<sup><xref ref-type="fn" rid="footnote1">1</xref></sup> about the training with &#x201C;silent videos,&#x201D; see <xref ref-type="bibr" rid="B42">Schweinberger and Girwidz (2022)</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Screenshot of set-up and execution of the study&#x2019;s experiment. The method of &#x201C;silent videos&#x201D; is described in detail by <xref ref-type="bibr" rid="B42">Schweinberger and Girwidz (2022)</xref>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g002.tif"/>
</fig>
<p>The task for the prospective teachers was to moderate the video appropriately for pupils in their first year of learning physics in junior high school. The prospective teachers were told to assume that their pupils had prior knowledge of the model of the rectilinear propagation of light and the appearance of cast shadows. The moderation of the videos was evaluated by four or five raters according to the criteria (see section &#x201C;3.3.1. Assessment of prospective teachers&#x2019; competence in moderating experimental videos&#x201D;).</p>
<p>In the next step, individual feedback was given to the prospective teachers. For the eye tracking feedback, each video moderated by the prospective teachers was shown to three randomly selected pupils of the 7th grade, and their eye movements were recorded using eye tracking. The data from this tracking were used to create a single gaze overlay video in which the three gaze overlays of the pupils were superimposed (see <xref ref-type="fig" rid="F3">Figure 3</xref>). When the three overlays are superimposed at the same time, it is easier to see the commonality of the pupils&#x2019; responses than when all three overlays are viewed in sequence.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Video excerpt of the gaze overlay feedback on a prospective teacher&#x2019;s moderation of the shadow video. The colored dots are the gaze overlays of three different pupils watching and reacting to one prospective teacher&#x2019;s moderation.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g003.tif"/>
</fig>
<p>The feedback took the form of a short, written critique by the lecturer before the discussion of the gaze overlay video. The gaze overlays were used to illustrate to the prospective teacher the immediate consequences of the criticisms previously made. In an individual conversation, the lecturer and each prospective teacher watched the video with the gaze overlays together and discussed the connection between moderation and the pupils&#x2019; reactions (e.g., whether the pupils looked at the intended area and how long their gaze stayed there). For this purpose, the gaze overlay feedback was stopped at important points to study certain situations more intensively. If necessary, the gaze overlay feedback was viewed several times.</p>
<p>Afterward, the prospective teachers were interviewed for the first time by another research assistant. They were asked to what extent the feedback from the pupils&#x2019; views helped them to assess their own ability to manage their pupils&#x2019; attention (for details, see section &#x201C;3.3.2. Interview and survey guide&#x201D;).</p>
</sec>
<sec id="S3.SS2.SSS2">
<title>3.2.2. Training phase</title>
<p>Later in the term, the prospective teachers moderated a total of six videos over a 10-week period to build up their skills. As in the pre-test, in the training phase the prospective teachers had to moderate &#x201C;silent videos&#x201D; of different experiments. To do this, they had to write their own script in advance, considering the criteria. In order to train as many facets of a moderation as possible, three criteria from the catalog (see list of criteria in the <xref ref-type="supplementary-material" rid="FS1">Supplementary Figure 1</xref>) were given by the lecturer for each training video. The main focus was on developing the prospective teachers&#x2019; ability to direct pupils&#x2019; attention in a targeted way. Each of these moderations was analyzed individually in a small group discussion based on the three pre-defined criteria. In preparation for these discussion meetings, each prospective teacher received a brief written critique in advance. Afterward, they had to set up the respective experiment in the seminar and present it to their colleagues. Thus, in the training phase the prospective teachers received verbal and written feedback from the lecturer and their student colleagues; no further gaze overlays were shown (see section &#x201C;6. Limitations&#x201D;).</p>
</sec>
<sec id="S3.SS2.SSS3">
<title>3.2.3. Post-test, second eye tracking feedback and post-interview</title>
<p>After the training, at the end of the term, the prospective teachers had to moderate the first video about core shadows and semi shadows from the pre-test for a second time (post-test). The moderation of the videos in the post-test was also evaluated according to the criteria (see section &#x201C;3.3.1. Assessment of prospective teachers&#x2019; competence in moderating experimental videos&#x201D;) as in the pre-test.</p>
<p>To generate the second eye tracking feedback, the moderated videos from the post-test were again shown to three pupils and their gazes were recorded. We decided to use different pupils than in the pre-test because we expected quite a large repetition effect: the content was a very simple phenomenon, and we wanted all pupils to have comparable prior knowledge. The resulting gaze overlay videos were produced as in the first feedback and shown to the prospective teachers together with a second short written critique. In addition, the prospective teachers watched the gaze overlay video of their first trial to discuss the developments between the pre- and post-test.</p>
<p>The prospective teachers were then interviewed a second time by another research assistant using the same questions as in the first interview.</p>
</sec>
</sec>
<sec id="S3.SS3">
<title>3.3. Assessment</title>
<sec id="S3.SS3.SSS1">
<title>3.3.1. Assessment of prospective teachers&#x2019; competence in moderating experimental videos</title>
<p>The criteria were developed over several years from practical experience and then discussed intensively by five physics lecturers from the chair of Physics Education at LMU Munich and two physics teacher trainers. The criteria are subject to constant further development. Due to the two different parts of the video (static set-up and dynamic execution), two evaluation schemes had to be developed (which were also explained to the prospective teachers).</p>
<p>In the set-up, each relevant object had to be described in three categories:</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>the location (e.g., &#x201C;on the left side of the table&#x201D;),</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>two surface characteristics (e.g., &#x201C;brown, wooden block&#x201D;), and</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>the function (e.g., &#x201C;provides shade&#x201D;).</p>
</list-item>
</list>
<p>Reading from left to right results in three consecutive sequences: first the lamps, then the block, and finally the screen. For each of the relevant objects, the number of mentions was counted.</p>
<p>Different categories were chosen to assess the moderation of the execution of the experiments (see <xref ref-type="fig" rid="F4">Figure 4</xref>):</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>Coding scheme for moderating the execution of the experiment. The colors represent the categories.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g004.tif"/>
</fig>
<p>The following procedure was used to assess the extent to which the moderations met the criteria. After adding the prospective teachers&#x2019; audio tracks to the &#x201C;silent videos,&#x201D; the different categories were localized in the timeline and marked by a corresponding colored bar. The categories were: general information about the experiment (blue), description of the action carried out (light blue), observation orders or questions (green), observation time (yellow), explanations (red), summary of the observations (orange), time without content (no color), and breathing time (purple). The length of the bars is proportional to the temporal length of these intervals. Each code was rated on whether it appeared. Then the ratios of the corresponding intervals to the total length of the moderation of the execution were calculated. This was done for the moderations of both the pre- and post-test (see <xref ref-type="fig" rid="F5">Figure 5</xref>).</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Timeline with rating of a prospective teacher&#x2019;s moderation: at the top the video track, below it the audio track, and below that sample codes for the assignment to the categories.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g005.tif"/>
</fig>
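<p>The interval-to-total ratio computation described above can be sketched as follows; the coded intervals, category names, and total length are invented for illustration and are not taken from the study&#x2019;s data.</p>

```python
# Sketch of the time-share computation described in the text (hypothetical data).
# Each coded interval is (category, start_s, end_s) on the moderation timeline.
intervals = [
    ("observation_order", 10.0, 14.0),
    ("observation_time", 14.0, 20.0),
    ("explanation", 30.0, 33.0),
]
total_length = 60.0  # total length of the execution moderation in seconds

def time_shares(intervals, total_length):
    """Sum interval durations per category and divide by the total length."""
    shares = {}
    for category, start, end in intervals:
        shares[category] = shares.get(category, 0.0) + (end - start)
    return {c: d / total_length for c, d in shares.items()}

print(time_shares(intervals, total_length))
```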
<p>This, of course, raises the question of the optimal ratio of the categories to each other for an ideal moderation. Discussions with various training teachers and lecturers led to the conclusion that it is difficult, if not impossible, to define an ideal moderation of an experiment; individual teaching styles and classroom situations vary too widely. We limited ourselves to the general consensus that a good moderation must give the pupils the opportunity to make the necessary observations, i.e., the pupils must be activated to make these observations. A good moderation should also not involve explanations, as these disrupt the observation process and deprive pupils of the opportunity to think through the process themselves; explanations belong in subsequent lessons. To assess these pupil-activating segments, we combined observation time and observation orders into one total pupil-activating time.</p>
</sec>
<sec id="S3.SS3.SSS2">
<title>3.3.2. Interview and survey guide</title>
<p>The interview and survey guide included 13 questions; the eight questions used for this research were the same in the pre- and post-interviews. The prospective teachers&#x2019; ratings were recorded using single items in the form of a 4-point Likert scale (&#x201C;4: completely agree,&#x201D; &#x201C;3: agree,&#x201D; &#x201C;2: disagree,&#x201D; and &#x201C;1: completely disagree&#x201D;). To obtain detailed information about their specific experiences, an open-ended question about each item was added. The interview and survey guide contained questions about the effects of moderating &#x201C;silent videos&#x201D; and of getting eye tracking feedback on their personal learning process (see <xref ref-type="supplementary-material" rid="FS2">Supplementary Figure 2</xref>). They were asked to describe how their skills in controlling attention in particular and in moderating the videos in general had changed (Q 2). They were also asked about the effects on their professional language (Q 3, 6) and the consequences for their own actions in experimentation (Q 7). Another important part of the interview questions was the prospective teachers&#x2019; experiences of eye tracking as a feedback tool. They were asked how they perceived the effectiveness of their facilitation on the pupils. A major question was how eye tracking showed the connection between guiding (tasks and questions) and the pupils&#x2019; attentional response (Q 9, 10, 11, and 12). Finally, the prospective teachers were asked how they rated their learning progress between the two measurement points with regard to controlling attention through language when facilitating experiments (additional question in the second interview). All interviews were evaluated and analyzed by two independent persons.</p>
</sec>
</sec>
<sec id="S3.SS4">
<title>3.4. Eye tracking system</title>
<p>In this study, eye tracking was used as a feedback tool. It is therefore not a measurement instrument for an outcome variable but part of the intervention/training. The eye movements were recorded with an Eye Follower from LC Technology. This system uses four cameras, two for tracking head motions and two for tracking the eyes. The accuracy was less than 0.4&#x00B0; of visual angle. The distance between participant and monitor was between 55 and 65 cm. The video area has a resolution of 1920 &#x00D7; 1080 pixels, and the resolution of the 24&#x2033; monitor is 1920 &#x00D7; 1200 pixels. The stimulus was enlarged to full monitor width and proportionally adjusted in height. The fixations and saccades were recorded at a sampling rate of 120 Hz, and the discrimination between saccades and fixations was done by the LC Fixation Detector (a dispersion-based algorithm: <xref ref-type="bibr" rid="B39">Salvucci and Goldberg, 2000</xref>).</p>
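<p>The dispersion-based fixation detection cited above can be sketched as a dispersion-threshold (I-DT) algorithm in the style of Salvucci and Goldberg (2000); the threshold values and sample data below are illustrative assumptions, not the parameters of the LC Fixation Detector.</p>

```python
# Minimal I-DT sketch: a window of gaze samples is a fixation while its
# dispersion (x-range + y-range) stays under a threshold; otherwise the
# window slides forward (those samples belong to a saccade).
def idt_fixations(points, max_dispersion=1.0, min_samples=6):
    """points: list of (x, y) gaze samples at a fixed sampling rate.
    Returns (start_index, end_index) pairs of detected fixations."""
    fixations = []
    i, n = 0, len(points)
    while i < n:
        j = i + min_samples
        if j > n:
            break  # not enough samples left for a minimal window
        xs = [p[0] for p in points[i:j]]
        ys = [p[1] for p in points[i:j]]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while j < n:
                xs.append(points[j][0])
                ys.append(points[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # slide the window by one sample
    return fixations

# Ten samples at one spot, then ten at another: two fixations.
print(idt_fixations([(0.0, 0.0)] * 10 + [(10.0, 10.0)] * 10))
```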
</sec>
<sec id="S3.SS5">
<title>3.5. Analysis</title>
<p>The moderations of the videos were rated by four to five independent raters based on the categories (see section &#x201C;3.3.1. Assessment of prospective teachers&#x2019; competence in moderating experimental videos&#x201D;). The raters marked the beginning and end of each category on the timeline of the videos and calculated the percentage of time. The intraclass correlation coefficient (model: two-way mixed; type: absolute agreement) was used to determine the agreement of the raters.</p>
<p>Dependent samples <italic>t</italic>-tests were used to test whether the mean speaking times per category differed between the pre-test and the post-test. The Bonferroni correction was used to counteract the accumulation of alpha errors by performing each individual test at a reduced significance level: the significance level of an individual test is the global significance level to be maintained divided by the number of individual tests (4 tests, significance level &#x03B1; = 0.0125).</p>
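<p>The testing procedure described above can be sketched as follows; the pre/post time shares are invented for illustration, and the <italic>t</italic> statistic is computed by hand from the paired differences.</p>

```python
# Sketch of a dependent-samples t statistic and the Bonferroni-corrected
# significance level described in the text (made-up data, not the study's).
import statistics

def paired_t(pre, post):
    """t statistic for a dependent-samples (paired) t-test."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation of differences
    return mean_d / (sd_d / n ** 0.5)

alpha_global = 0.05
n_tests = 4
alpha_local = alpha_global / n_tests  # reduced level per individual test

pre = [4.0, 6.5, 3.2, 8.1, 5.5, 2.0, 7.3]      # hypothetical time shares (%)
post = [18.2, 20.1, 15.4, 22.0, 17.8, 12.5, 19.9]
print(paired_t(pre, post), alpha_local)
```

The resulting statistic would then be compared against the critical value at the reduced level &#x03B1; = 0.0125 rather than 0.05.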
<p>The interviews were analyzed using qualitative content analysis according to <xref ref-type="bibr" rid="B28">Mayring (2015)</xref>. We followed a descriptive approach, analyzing the texts with a deductively formulated category system. We recorded the occurrence of these categories in category frequencies. The resulting scale has an ordinal scale level, so the &#x201C;Cohen&#x2019;s Weighted Kappa&#x201D; coefficient was calculated for the raters&#x2019; agreement. We chose quadratic weights, where the distances between the raters&#x2019; scores are squared. This gives more weight to pairs of ratings that are far apart than to pairs that are close together.</p>
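<p>The quadratically weighted kappa described above can be sketched as follows; the example ratings are illustrative, not the study&#x2019;s interview data.</p>

```python
# Minimal sketch of Cohen's kappa with quadratic weights for two raters on
# an ordinal scale: disagreements far apart are penalized more strongly.
def weighted_kappa(r1, r2, n_categories):
    """Kappa with quadratic weights w_ij = ((i - j) / (k - 1))**2."""
    k, n = n_categories, len(r1)
    # Observed agreement matrix (proportions).
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    # Marginal distributions and quadratic weight matrix.
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = [[((i - j) / (k - 1)) ** 2 for j in range(k)] for i in range(k)]
    # Weighted disagreement, observed vs. expected under independence.
    num = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Perfect agreement on a 4-point ordinal scale yields kappa = 1.
print(weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))
```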
</sec>
</sec>
<sec id="S4" sec-type="results">
<title>4. Results</title>
<p>The moderation of the set-up was evaluated by four independent raters. The results of the rater agreement analyses show an agreement between the four raters of <italic>r</italic> = 0.799 [95% CI (0.686, 0.887)].</p>
<p>The moderation of the execution was evaluated by five independent raters. The value of the inter-rater correlation coefficient <italic>r</italic> = 0.993 shows a very high level of agreement between the five raters [95% CI (0.991, 0.994); see <xref ref-type="bibr" rid="B7">Cicchetti, 1994</xref>].</p>
<p>The interviews were evaluated by two independent raters. The results of the rater agreement analyses show an agreement between the two raters of &#x03BA; = 0.694, which is just above the 5% significance level [<italic>p</italic> = 0.068; 95% CI (0.378, 1.010)].</p>
<p>The findings to answer the first research question, namely, whether eye tracking feedback helps prospective teachers to explain the experimental set-up in a way that is appropriate for pupils, are divided into a general and a specific part.</p>
<sec id="S4.SS1">
<title>4.1. Set-up: general results</title>
<p>Before looking at the individual objects of the set-up to answer RQ 1, we will examine the connection of the set-up with the previous knowledge and the subsequent execution. A total of 43% of the prospective teachers started their first moderation attempt with an introductory sentence about the topic of the upcoming experiment, with only two of them really connecting to the pupils&#x2019; prior knowledge. The share of participants who started with a reasonable introductory sentence increased to 55% in the post attempt. The share of participants moving from set-up to execution with a research question or hypothesis increased from 10 to 23%. In both cases, the low percentage indicates that the participants were not, or did not become, aware of the importance of the transition between set-up and execution of the experiment. At 93%, the overwhelming majority adhered to the reading direction (from left to right), and virtually all participants (except one) adhered to it in the post-trial when trying to direct the pupils&#x2019; attention (see <xref ref-type="table" rid="T1">Table 1</xref>).</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Percentage of items &#x201C;introductory sentence&#x201D;, &#x201C;hypothesis mentioned&#x201D; and &#x201C;reading direction adhered&#x201D; mentioned in the pre- and post-test.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Pre</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Post</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Introductory sentence</td>
<td valign="top" align="center">43.3%</td>
<td valign="top" align="center">55.0%</td>
</tr>
<tr>
<td valign="top" align="left">Hypothesis mentioned</td>
<td valign="top" align="center">10.0%</td>
<td valign="top" align="center">22.7%</td>
</tr>
<tr>
<td valign="top" align="left">Reading direction adhered</td>
<td valign="top" align="center">93.3%</td>
<td valign="top" align="center">86.6%</td>
</tr>
</tbody>
</table></table-wrap>
</sec>
<sec id="S4.SS2">
<title>4.2. Set-up: specific results</title>
<p>The introductory sentence, the link to prior knowledge, and the reading direction do not directly influence the pupils&#x2019; visual attention to certain areas. Thus, there is no focusing effect on the observed gaze overlays: the pupils&#x2019; gazes move across the whole screen and become more focused as soon as the experimental set-up appears and the prospective teachers start talking. This behavior of the pupils did not change between the pre- and post-trial.</p>
<p>After the introductory sentence, the number of mentions regarding an object was counted (e.g., location, function, and two surface features; see section &#x201C;3.3.1. Assessment of prospective teachers&#x2019; competence in moderating experimental videos&#x201D;). The mentions of the lamps increased from 56 to 76% (<italic>t</italic> = &#x2212;4.636, <italic>p</italic> &#x003C; 0.001, Cohen&#x2019;s |<italic>d|</italic> = 4.575, <italic>n</italic> = 15), those concerning the block from 50 to 71% (<italic>t</italic> = &#x2212;15.756, <italic>p</italic> &#x003C; 0.001, Cohen&#x2019;s |<italic>d|</italic> = 2.926, <italic>n</italic> = 15), and those of the screen from 71 to 84% (<italic>t</italic> = &#x2212;9.1454, <italic>p</italic> &#x003C; 0.001, Cohen&#x2019;s |<italic>d|</italic> = 1.698, <italic>n</italic> = 15). The number of mentions increased for all objects (see <xref ref-type="fig" rid="F6">Figure 6</xref>).</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Percentage of the objects &#x201C;lamps, block and screen&#x201D; mentioned in the pre- and post-test.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g006.tif"/>
</fig>
<sec id="S4.SS2.SSS1">
<title>4.2.1. Mentioning &#x201C;block&#x201D; (detailed)</title>
<p>A more detailed analysis, here of the description of the opaque block in the light path, provides further insights: cues referring to the location of the block increased from 42 to 75% of applicable mentions, while the description of the block&#x2019;s function in the experiment remained at about 33% (two participants who had mentioned the block&#x2019;s function in the first attempt did not mention it in the second attempt). Altogether, the function of the block seems to be too obvious for many prospective teachers to mention. In the post-attempt, all prospective teachers described the block with at least one surface feature, with the number of mentions increasing from 97 to 100%. A total of 77% of them also mentioned a second feature, up from 27% in the first trial (see <xref ref-type="fig" rid="F7">Figure 7</xref>).</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p>Percentage of the object &#x201C;block&#x201D; mentioned in detail in the pre- and post-test.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g007.tif"/>
</fig>
</sec>
</sec>
<sec id="S4.SS3">
<title>4.3. Execution: specific results</title>
<p>A total of 60% of the prospective teachers did not ask any question or give any observation order to the pupils in the first attempt. The same applied to giving the pupils time for observations: here, too, 60% of the prospective teachers did not leave any time to do so. In the second moderation, after the training phase and the eye tracking feedback, all prospective teachers gave observation orders and time to carry out these tasks.</p>
<sec id="S4.SS3.SSS1">
<title>4.3.1. Activating time</title>
<p>Observation order time and observation time together result in the activating time. A total of 40% of the prospective teachers placed observation orders. The average order time increased from 2.5 to 13.7 s, which means an increase in time share from 4.3 to 18.9%. The same development can be seen in the amount of observation time given to the pupils. A total of 40% of the prospective teachers gave the pupils time for observations. The average observation time for all participants increased from 2.8 to 13.9 s, which means an increase in time share from 4.2 to 18.5%. Due to the high number of prospective teachers who did not give observation orders or time in the pre-trial, the <italic>SD</italic> is very high, so the variance in response behavior is also large. The gap between pupil-activating and non-activating prospective teachers is very large.</p>
<p>If we restrict ourselves to the participants who gave both observation order and observation time (<italic>n</italic> = 7), the following picture emerges:</p>
<p>The results of the dependent samples <italic>t</italic>-test show a significant difference with a high effect size between the mean percentage of activating time before and after moderation training with feedback [<italic>t</italic> = &#x2212;3.075, <italic>p</italic> = 0.033, 95% CI (&#x2212;29.21, &#x2212;7.61), Cohen&#x2019;s |<italic>d|</italic> = 15.77, <italic>n</italic> = 7]. After training with eye tracking feedback (<italic>M</italic> = 29.66%, <italic>SD</italic> = 11.29), subjects used significantly more pupil-activating &#x201C;tools&#x201D; in their moderation than before training (<italic>M</italic> = 11.33%, <italic>SD</italic> = 11.13). Due to the small sample size, a bootstrapping procedure with 10,000 samples was applied.</p>
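<p>The bootstrapping idea mentioned above can be sketched as a percentile bootstrap over paired differences; the data, seed, and interval construction below are illustrative assumptions, not the study&#x2019;s exact procedure.</p>

```python
# Sketch of a percentile bootstrap: resample the paired pre/post differences
# with replacement and read a confidence interval off the resampled means.
import random
import statistics

def bootstrap_ci(diffs, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of paired differences."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(diffs, k=len(diffs)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical pre/post activating-time shares (%) for n = 7 participants.
pre = [5.0, 0.0, 20.0, 10.0, 25.0, 8.0, 11.0]
post = [25.0, 18.0, 40.0, 28.0, 45.0, 24.0, 27.0]
diffs = [b - a for a, b in zip(pre, post)]
print(bootstrap_ci(diffs))
```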
<p>For prospective teachers who gave both orders and observation time in the first trial, the average length of orders increased from 6.5 to 15.3 s while the observation time given increased from 6.8 to 14.5 s. The share of pupil activating time more than doubled after the training (see <xref ref-type="table" rid="T2">Table 2</xref> and <xref ref-type="fig" rid="F8">Figure 8</xref>).</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Prospective teachers&#x2019; time share and average time (pre and post) for observation orders, observation time and activating time when moderating the execution of an experiment.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Time share pre</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Time share post</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Average time pre</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Average time post</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Observation orders</td>
<td valign="top" align="center">4.3%</td>
<td valign="top" align="center">18.9%</td>
<td valign="top" align="center">2.5 s</td>
<td valign="top" align="center">13.7 s</td>
</tr>
<tr>
<td valign="top" align="left">Observation time</td>
<td valign="top" align="center">4.2%</td>
<td valign="top" align="center">18.5%</td>
<td valign="top" align="center">2.8 s</td>
<td valign="top" align="center">13.9 s</td>
</tr>
<tr>
<td valign="top" align="left">Activating time</td>
<td valign="top" align="center">8.5%</td>
<td valign="top" align="center">37.4%</td>
<td valign="top" align="center">5.3 s</td>
<td valign="top" align="center">27.6 s</td>
</tr>
</tbody>
</table></table-wrap>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption><p>Increase of observation orders and observation time (pupils activating time) for prospective teachers who have given both in the first trial.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g008.tif"/>
</fig>
<p>Overall, however, one of the most important findings is that all prospective teachers, regardless of what they did in the pre-trial, gave observation orders and observation time after the training with eye tracking feedback. The average percentage of pupil-activating time in the second trial was 34%, so more than one-third of the execution time was used to activate the pupils.</p>
</sec>
<sec id="S4.SS3.SSS2">
<title>4.3.2. Decrease of other parameters</title>
<p>Part of the training concept is that no explanations should be given while conducting an experiment: explanations are a major part of the next step in the lesson, and during the experiment only observations should be made and recorded. Nevertheless, the percentage of explanations given by the prospective teachers decreased only from 10 to 8.3% of the time, with the number of teachers giving explanations remaining the same. The prospective teachers did not seem to find this instruction meaningful.</p>
<p>The time spent on summaries also decreased from 42 to 30%, which seems to be a consequence of the fact that the prospective teachers were able to describe the essential content more precisely.</p>
<p>The descriptions of action followed the same trend as the summaries, falling from 19 to 15% of the time, although the number of individual descriptions increased. The execution of the experiment is divided into three sequences (lamp 1, lamp 2, lamps 1 and 2), and in the post-trial 14 out of 15 prospective teachers gave concise and accurate action descriptions of these sequences. These were followed by observation tasks, whose timing was much better aligned with the temporal sequence of the experiment. The shorter duration of the action descriptions is again a consequence of the much more precise formulation of the descriptions.</p>
<p>Fortunately, the number of prospective teachers who temporarily left their pupils without any task dropped from 11 to five, i.e., it more than halved, and the portion of time fell from 8.9 to 1.3%. All increases and decreases are shown in the following Sankey diagram (see <xref ref-type="fig" rid="F9">Figure 9</xref>).</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption><p>Sankey-diagram of all changes of the moderation of the execution of an experiment (all prospective teachers).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="feduc-08-1140272-g009.tif"/>
</fig>
</sec>
</sec>
<sec id="S4.SS4">
<title>4.4. Results of the interviews and surveys</title>
<p>To answer the research question 2 (RQ2) of whether training with eye tracking feedback helps the prospective teachers to increase their pupils&#x2019; activating time, we analyzed the interviews and surveys. We considered the following statements to evaluate and rate the interviews:</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>Category 1: Awareness of own impact <italic>&#x201C;Eye tracking made me aware of my own impact on pupils.&#x201D;</italic></p>
</list-item>
</list>
<p>The Likert scale with the statement &#x201C;<italic>Eye tracking made me aware of my own impact on pupils&#x201D;</italic> was answered in the pre-test with a mean <italic>M</italic> = 3.20 and <italic>SD</italic> = 0.75 and in the post-test with a mean <italic>M</italic> = 3.40 and <italic>SD</italic> = 0.49. This indicates that a large share of the prospective teachers showed high agreement with the statement and that this agreement even increased in the post-test. The decrease in <italic>SD</italic> shows that their agreement also became more uniform.</p>
<p>To evaluate the interviews regarding this category the following two key phrases were used: (1) &#x201C;see reaction of the pupils.&#x201D; and (2) &#x201C;see importance of orders.&#x201D;</p>
<p>Analyzing the interviews, 63% of the participants fell into this category after the pre-trial and 70% after the post-trial. A comment by a prospective teacher (student_14) was: <italic>&#x201C;Eye tracking feedback is really good because you can just see how you&#x2019;re affecting the pupils. You&#x2019;re really doing something practical where you can directly see the consequences of your actions.&#x201D;</italic></p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>Category 2: Connection between control codes and pupils&#x2019; reaction &#x201C;<italic>Eye tracking made me realize the connection between control codes (such as assignments and questions) and the response of the pupils.&#x201D;</italic></p>
</list-item>
</list>
<p>The statement &#x201C;<italic>Eye tracking made me realize the connection between control codes (such as assignments and questions) and the response of the pupils&#x201D;</italic> was answered in the pre-test with a mean <italic>M</italic> = 3.20 and <italic>SD</italic> = 0.65 and in the post-test with a mean <italic>M</italic> = 3.73 and <italic>SD</italic> = 0.44. This indicates that a large share of the prospective teachers showed high agreement with the statement and that this agreement even increased to very high agreement in the post-test. The decrease in <italic>SD</italic> shows that their agreement also became more uniform. All prospective teachers in the study saw the pupils&#x2019; reaction to the control codes they applied.</p>
<p>To evaluate the interviews regarding this category the following three key phrases were used: (1) &#x201C;see the effect of the control codes,&#x201D; (2) &#x201C;see effect of the spoken word,&#x201D; (3) &#x201C;see where the pupils look to.&#x201D;</p>
<p>A total of 53% of the participants fell into this category after the pre-trial, 63% after the post-trial. &#x201C;<italic>You can clearly see where the children look during the experiment, especially how they react to instructions</italic>&#x201D;, was one of the prospective teachers&#x2019; comments (student_10).</p>
<list list-type="simple">
<list-item>
<label>&#x2022;</label>
<p>Category 3: Perceived difficulty in directing attention <italic>&#x201C;I found it easy to direct the attention of the pupils to a certain area of the experiment.&#x201D;</italic></p>
</list-item>
</list>
<p>The Likert scale with the statement &#x201C;<italic>I found it easy to direct the attention of the pupils to a certain area of the experiment</italic>&#x201D; was answered in the pre-test with a mean <italic>M</italic> = 2.20 and <italic>SD</italic> = 0.65. After the first interview it became clear that the prospective teachers were rather reserved about their ability to direct pupils&#x2019; attention. With a mean <italic>M</italic> = 2.87 and <italic>SD</italic> = 0.44 in the post-test, it is evident that the difficulties in directing the pupils&#x2019; attention decreased and that the prospective teachers were in consensus about this development. However, of all the Likert-scored questions, agreement with this category lagged behind the others.</p>
<p>This category was evaluated on the basis of responses about what the prospective teachers felt was important in directing pupils&#x2019; attention; the following three key words were used: (1) &#x201C;location items,&#x201D; (2) &#x201C;&#x2026; surface features,&#x201D; (3) &#x201C;observation order.&#x201D;</p>
<p>According to the interviews, 63% of the participants fell into this category after the pre-trial and 80% after the post-trial. <italic>&#x201C;I have noticed which work orders help more,&#x201D;</italic> commented student_9.</p>
<p>Another interesting finding from the interviews with the prospective teachers and the examination of many moderated videos must be mentioned. A total of 75% of the prospective teachers indicated that it is important not only to direct the pupils&#x2019; eyes to a particular area, but also to keep them there. It therefore seems necessary to give the pupils a second assignment after the instruction on where to look, so that their gazes stay on that spot.</p>
<p>This result was unexpected and had not previously been part of our considerations of attention-controlling moderation of experiments. It seems necessary to investigate this circumstance more closely.</p>
<p>Generally, the overwhelming majority of prospective teachers stated that <italic>&#x201C;it was very enlightening and informative to see where the pupils were looking and how they reacted to the instructions&#x201D;</italic> (student_13).</p>
</sec>
</sec>
<sec id="S5" sec-type="discussion">
<title>5. Discussion</title>
<p>In this study, we investigated a training with eye tracking feedback to improve prospective teachers&#x2019; abilities in moderating experiments in class. To do this, we demonstrated to prospective teachers their ability to direct pupils&#x2019; attention using only verbal cues. We encouraged these skills through training and intensive feedback on their abilities. The three phases of feedback described in the literature (<xref ref-type="bibr" rid="B48">Wisniewski et al., 2020</xref>) could be realized in our approach: &#x201C;feed-up&#x201D; was realized by the prospective teachers watching the gaze overlay videos of their moderation, &#x201C;feed-back&#x201D; by comparing pre- and post-trial, and &#x201C;feed-forward&#x201D; by becoming aware of which skills they should develop and which habits they should avoid. This feedback consisted of an assessment of the quality of the moderation (rating) and, in particular, the pupils&#x2019; reactions to the moderation (eye tracking). The success of this approach was measured by an assessment and through interviews.</p>
<p>Our results answering RQ 1 show that our approach significantly improves the ability to moderate an experimental set-up through verbal cues. We could show that training with eye tracking feedback and rating feedback helped prospective teachers to explain the set-up of an experiment in a way appropriate for pupils. The number of categories mentioned by the prospective teachers increased for all three objects in the set-up. The prospective teachers rarely provided a second surface feature and often relied on the pupils&#x2019; presumed prior knowledge. It is interesting to note that although the prospective teachers became better at locating the individual objects and named a second surface feature much more frequently, there was no significant change for any of the three objects in terms of naming their function. The function of an object also hardly appears in the interviews. To the prospective teachers, the function of an object seemed to be supplied automatically with the naming of the object, or not worth mentioning. The wooden block received little attention in both trials, although, compared to the lamps and screens, the block&#x2019;s function as a shade provider is not self-evident. When asked why the block was given so little attention, reference was made to the corresponding preliminary experiment, although only two prospective teachers, and then only in the second trial, made a sufficient connection to the previous knowledge (in this case, the creation of a simple shadow in the model of the rectilinear propagation of light). Overall, the prospective teachers had difficulty making transitions between the different phases of the experiment, with only three (pre) or four (post) leading into the execution with a research question or similar.</p>
<p>With regard to the moderation of the execution of the experiments, the results show that the time in which pupils were given observation orders and observation time roughly doubled, while all other parameters approximately halved. Not only did the activating time increase, but the prospective teachers also paid much more attention to the respective sequences, so that the observation period corresponded much better to the action in the experiment. The share of time spent explaining also decreased, although not as much as hoped. The prospective teachers also seemed to have difficulty refraining from explanations during the presentation. But with the more intensive study of their own linguistic guidance, the prospective teachers&#x2019; moderations became steadily shorter and more concise in content. This was reflected in the decrease in complaints that the playtime of the videos was too short. Nevertheless, some moderations remained long-winded. However, the linguistic content analysis of the moderations is still pending.</p>
<p>Training with feedback through eye tracking and assessment resulted in a significant increase in pupil-activating time (RQ2). The results of the interviews and surveys show that training with eye tracking as a feedback tool enjoys a high level of acceptance and perceived usefulness among the prospective teachers. In the interviews, the prospective teachers described, among other things, how eye tracking feedback made them aware of their previous abilities to accompany experiments linguistically. The direct feedback from the pupils set in motion a process that made them realize the value of a good description of the experimental set-up, but also the possibilities of attention-grabbing work assignments. This feedback acted back on our prospective teachers as described in the Process Model of SFT (<xref ref-type="bibr" rid="B37">R&#x00F6;hl et al., 2021</xref>). Furthermore, the eye tracking feedback, with the accompanying verbal analysis by the lecturer, provided the prospective teachers with information on how to improve their moderation skills. The interviews revealed the extent to which the prospective teachers grappled with this information and developed individual instructional approaches (see section &#x201C;1.2.2. Levels and forms of feedback&#x201D;).</p>
<p>In the gaze overlay videos, when comparing pre- and post-moderation, one can clearly see the stronger focus of the pupils&#x2019; gaze and the longer dwell time in one area. Unfortunately, this effect cannot be represented statistically in our approach, since only three pupils per prospective teacher were available. Watching the gaze overlay videos of their own moderation showed the prospective teachers the gap between the target (directing pupils&#x2019; attention) and the current state.</p>
</sec>
<sec id="S6">
<title>6. Limitations</title>
<p>The results of the study should be interpreted with the following limitations in mind. Firstly, the sample size is relatively small: fifteen fifth-semester prospective teachers participated in the study. They comprised all prospective teachers enrolled in that semester, at that location, and in that course. Expanding the sample to include other locations or prospective teachers from other semesters might have introduced biases in prior knowledge or experience. In addition, the recorded video experiments of the pre-test and the post-test of all participating subjects were shown to three pupils each, and their eye movements were recorded. The effort involved was already very high and would have increased massively with a larger sample; this was therefore not done. Secondly, the pupils&#x2019; eye movements show the participants how their attentional cues act on the pupils&#x2019; visual attention. Of course, they do not provide any information about what the pupils have actually learned. The purpose of this study was not to measure changes in the pupils&#x2019; knowledge, but their reactions to the prospective teachers&#x2019; verbal input, which served as feedback.</p>
</sec>
<sec id="S7" sec-type="conclusion">
<title>7. Conclusion</title>
<p>Observing the gaze behavior of pupils watching a &#x201C;silent video&#x201D; moderated by the prospective teachers themselves gives them authentic feedback on their own effectiveness. Prospective teachers can literally see the impact of their words in the reactions of the pupils listening to them. They see where the pupils are looking and individually recognize when or why the pupils leave the currently important areas of the set-up. The most important achievement, however, is that all prospective teachers can directly see and experience their own individual learning progress in accompanying experiments in an attention-activating verbal way. They can see how even small changes (e.g., giving a second surface feature or describing the function) in moderating an experiment can have a lasting impact on pupils&#x2019; attention.</p>
<p>The analysis of the connection between control codes (given as verbal attentional cues) and pupils&#x2019; responses leads to another important result of the use of eye tracking: pupils follow the command to look at a particular area of the set-up almost every time and almost immediately, but as quickly as they look, they leave it again. To keep their attention on the spot, it is necessary to give a second assignment or to describe another feature, such as the function (given as a verbal content-related cue) or another surface feature of an object. Pupils who have received at least two pieces of information or assignments stay longer on this area of the set-up. Staying longer on a certain spot is necessary for the pupils to make the observation the teacher intended. Thus, the results show that one strength of verbal cues, namely being able to offer both attentional guidance and content support, should also be used.</p>
<p>Demonstrating experiments in class in a way that is effective for learning requires a lot of practice. Training with &#x201C;silent videos&#x201D; is a promising method to support this practice, although it cannot replace real-life execution. It is not the intent of this training to standardize prospective teachers&#x2019; moderation; there is no ideal type of moderation, and everyone should develop their own appropriate teaching style. However, eye tracking feedback gives prospective teachers unbiased and direct feedback from real pupils on their verbal skills and the impact of their use of language on their pupils. This study has so far analyzed only the group that received both the training and the eye tracking feedback. It is therefore unclear how much of the prospective teachers&#x2019; positive development can be attributed to the training and how much to the eye tracking feedback. Further research should explore how much of the improvement in the video presentations can be explained by the eye tracking feedback factor and how much by the training factor.</p>
</sec>
<sec id="S8" sec-type="data-availability">
<title>Data availability statement</title>
<p>The original contributions presented in this study are included in the article/<xref ref-type="supplementary-material" rid="VS1">Supplementary material</xref>; further inquiries can be directed to the corresponding author.</p>
</sec>
<sec id="S9" sec-type="ethics-statement">
<title>Ethics statement</title>
<p>Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="S10" sec-type="author-contributions">
<title>Author contributions</title>
<p>MS, BW, and RG contributed to the conception and design of the study. MS organized the database and wrote the first draft of the manuscript. MS and BW performed the statistical analysis. BW wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.</p>
</sec>
</body>
<back>
<sec id="S11" sec-type="funding-information">
<title>Funding</title>
<p>This work on &#x201C;silent videos&#x201D; by the Chair for Physics Education at LMU Munich is part of the project Lehrerbildung@LMU, run by the Munich Center for Teacher Education, which was funded by the Federal Ministry of Education and Research under the funding code 01JA1810.</p>
</sec>
<sec id="S12" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="S13" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec id="S14" sec-type="supplementary-material">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/feduc.2023.1140272/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/feduc.2023.1140272/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Video_1.MP4" id="VS1" mimetype="video/mp4" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image_1.tiff" id="FS1" mimetype="image/tiff" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image_2.tiff" id="FS2" mimetype="image/tiff" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<fn-group>
<fn id="footnote1">
<label>1</label>
<p>For all &#x201C;silent videos&#x201D; see: <ext-link ext-link-type="uri" xlink:href="https://www.didaktik.physik.uni-muenchen.de/lehrerbildung/lehrerbildung_lmu/video/">https://www.didaktik.physik.uni-muenchen.de/lehrerbildung/lehrerbildung_lmu/video/</ext-link></p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Addleman</surname> <given-names>D. A.</given-names></name> <name><surname>Jiang</surname> <given-names>Y. V.</given-names></name></person-group> (<year>2019</year>). <article-title>Experience-driven auditory attention.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>23</volume> <fpage>927</fpage>&#x2013;<lpage>937</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2019.08.002</pub-id> <pub-id pub-id-type="pmid">31521482</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alemdag</surname> <given-names>E.</given-names></name> <name><surname>Cagiltay</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>A systematic review of eye tracking research on multimedia learning.</article-title> <source><italic>Comput. Educ.</italic></source> <volume>125</volume> <fpage>413</fpage>&#x2013;<lpage>428</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2018.06.023</pub-id> <pub-id pub-id-type="pmid">25943601</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alpizar</surname> <given-names>D.</given-names></name> <name><surname>Adesope</surname> <given-names>O. O.</given-names></name> <name><surname>Wong</surname> <given-names>R. M.</given-names></name></person-group> (<year>2020</year>). <article-title>A meta-analysis of signaling principle in multimedia learning environments.</article-title> <source><italic>Educ. Tech. Res. Dev.</italic></source> <volume>68</volume> <fpage>2095</fpage>&#x2013;<lpage>2119</lpage>. <pub-id pub-id-type="doi">10.1007/s11423-020-09748-7</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amso</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <source><italic>Visual attention: Its role in memory and development</italic></source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>American Psychological Association</publisher-name>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://dcnlab.psychology.columbia.edu/sites/default/files/content/Visual-attention-Its-role-in-memory-and-development.pdf">https://dcnlab.psychology.columbia.edu/sites/default/files/content/Visual-attention-Its-role-in-memory-and-development.pdf</ext-link></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boucheix</surname> <given-names>J. M.</given-names></name> <name><surname>Lowe</surname> <given-names>R. K.</given-names></name></person-group> (<year>2010</year>). <article-title>An eye tracking comparison of external pointing cues and internal continuous cues in learning with complex animations.</article-title> <source><italic>Learn. Instr.</italic></source> <volume>20</volume> <fpage>123</fpage>&#x2013;<lpage>135</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2009.02.015</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cannon</surname> <given-names>M. D.</given-names></name> <name><surname>Witherspoon</surname> <given-names>R.</given-names></name></person-group> (<year>2005</year>). <article-title>Actionable feedback: Unlocking the power of learning and performance improvement.</article-title> <source><italic>Acad. Manage. Perspect.</italic></source> <volume>19</volume> <fpage>120</fpage>&#x2013;<lpage>134</lpage>. <pub-id pub-id-type="doi">10.5465/ame.2005.16965107</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cicchetti</surname> <given-names>D. V.</given-names></name></person-group> (<year>1994</year>). <article-title>Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology.</article-title> <source><italic>Psychol. Assess.</italic></source> <volume>6</volume>:<issue>284</issue>. <pub-id pub-id-type="doi">10.1037/1040-3590.6.4.284</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cullipher</surname> <given-names>S.</given-names></name> <name><surname>Hansen</surname> <given-names>S. J.</given-names></name> <name><surname>VandenPlas</surname> <given-names>J. R.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Eye tracking as a research tool: An introduction</article-title>,&#x201D; in <source><italic>Eye tracking for the chemistry education researcher</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>VandenPlas</surname> <given-names>J. R.</given-names></name> <name><surname>Hansen</surname> <given-names>S. J. R.</given-names></name> <name><surname>Cullipher</surname> <given-names>S.</given-names></name></person-group> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>American Chemical Society</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1021/bk-2018-1292.ch001</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>de Koning</surname> <given-names>B. B.</given-names></name> <name><surname>Tabbers</surname> <given-names>H. K.</given-names></name> <name><surname>Rikers</surname> <given-names>R. M.</given-names></name> <name><surname>Paas</surname> <given-names>F.</given-names></name></person-group> (<year>2010</year>). <article-title>Attention guidance in learning from a complex animation: Seeing is understanding?.</article-title> <source><italic>Learn. Instr.</italic></source> <volume>20</volume> <fpage>111</fpage>&#x2013;<lpage>122</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2009.02.010</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Folyi</surname> <given-names>T.</given-names></name> <name><surname>Feh&#x00E9;r</surname> <given-names>B.</given-names></name> <name><surname>Horv&#x00E1;th</surname> <given-names>J.</given-names></name></person-group> (<year>2012</year>). <article-title>Stimulus-focused attention speeds up auditory processing.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>84</volume> <fpage>155</fpage>&#x2013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2012.02.001</pub-id> <pub-id pub-id-type="pmid">22326595</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ginns</surname> <given-names>P.</given-names></name></person-group> (<year>2005</year>). <article-title>Meta-analysis of the modality effect.</article-title> <source><italic>Learn. Instr.</italic></source> <volume>15</volume> <fpage>313</fpage>&#x2013;<lpage>331</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2005.07.001</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Glaser</surname> <given-names>M.</given-names></name> <name><surname>Schwan</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Explaining pictures: How verbal cues influence processing of pictorial learning material.</article-title> <source><italic>J. Educ. Psychol.</italic></source> <volume>107</volume>:<issue>1006</issue>. <pub-id pub-id-type="doi">10.1037/edu0000044</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hansen</surname> <given-names>S. J. R.</given-names></name> <name><surname>Hu</surname> <given-names>B.</given-names></name> <name><surname>Riedlova</surname> <given-names>D.</given-names></name> <name><surname>Kelly</surname> <given-names>R. M.</given-names></name> <name><surname>Akaygun</surname> <given-names>S.</given-names></name> <name><surname>Villalta-Cerdas</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Critical consumption of chemistry visuals: Eye tracking structured variation and visual feedback of redox and precipitation reactions.</article-title> <source><italic>Chem. Educ. Res. Pract.</italic></source> <volume>20</volume> <fpage>837</fpage>&#x2013;<lpage>850</lpage>. <pub-id pub-id-type="doi">10.1039/C9RP00015A</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hattie</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). &#x201C;<article-title>Foreword</article-title>,&#x201D; in <source><italic>Student feedback on teaching in schools</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Rollett</surname> <given-names>W.</given-names></name> <name><surname>Bijlsma</surname> <given-names>H.</given-names></name> <name><surname>R&#x00F6;hl</surname> <given-names>S.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>7</lpage>.</citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hattie</surname> <given-names>J.</given-names></name> <name><surname>Timperley</surname> <given-names>H.</given-names></name></person-group> (<year>2007</year>). <article-title>The power of feedback.</article-title> <source><italic>Rev. Educ. Res.</italic></source> <volume>77</volume> <fpage>81</fpage>&#x2013;<lpage>112</lpage>. <pub-id pub-id-type="doi">10.3102/003465430298487</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>The effect of cue labeling in multimedia learning: Evidence from eye tracking.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>12</volume>:<issue>736922</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2021.736922</pub-id> <pub-id pub-id-type="pmid">34975627</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jarodzka</surname> <given-names>H.</given-names></name> <name><surname>Janssen</surname> <given-names>N.</given-names></name> <name><surname>Kirschner</surname> <given-names>P. A.</given-names></name> <name><surname>Erkens</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Avoiding split attention in computer-based testing: Is neglecting additional information facilitative?.</article-title> <source><italic>Br. J. Educ. Technol.</italic></source> <volume>46</volume> <fpage>803</fpage>&#x2013;<lpage>817</lpage>. <pub-id pub-id-type="doi">10.1111/bjet.12174</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jarodzka</surname> <given-names>H.</given-names></name> <name><surname>Van Gog</surname> <given-names>T.</given-names></name> <name><surname>Dorr</surname> <given-names>M.</given-names></name> <name><surname>Scheiter</surname> <given-names>K.</given-names></name> <name><surname>Gerjets</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>Learning to see: Guiding students&#x2019; attention via a model&#x2019;s eye movements fosters learning.</article-title> <source><italic>Learn. Instr.</italic></source> <volume>25</volume> <fpage>62</fpage>&#x2013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2012.11.004</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Katsuki</surname> <given-names>F.</given-names></name> <name><surname>Constantinidis</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Bottom-up and top-down attention: Different processes and overlapping neural systems.</article-title> <source><italic>Neuroscientist</italic></source> <volume>20</volume> <fpage>509</fpage>&#x2013;<lpage>521</lpage>. <pub-id pub-id-type="doi">10.1177/1073858413514136</pub-id> <pub-id pub-id-type="pmid">24362813</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keller</surname> <given-names>L.</given-names></name> <name><surname>Cortina</surname> <given-names>K. S.</given-names></name> <name><surname>M&#x00FC;ller</surname> <given-names>K.</given-names></name> <name><surname>Miller</surname> <given-names>K. F.</given-names></name></person-group> (<year>2022</year>). <article-title>Noticing and weighing alternatives in the reflection of regular classroom teaching: Evidence of expertise using mobile eye tracking.</article-title> <source><italic>Instr. Sci.</italic></source> <volume>50</volume> <fpage>251</fpage>&#x2013;<lpage>272</lpage>. <pub-id pub-id-type="doi">10.1007/s11251-021-09570-5</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kluger</surname> <given-names>A. N.</given-names></name> <name><surname>DeNisi</surname> <given-names>A.</given-names></name></person-group> (<year>1996</year>). <article-title>The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory.</article-title> <source><italic>Psychol. Bull.</italic></source> <volume>119</volume>:<issue>254</issue>. <pub-id pub-id-type="doi">10.1037/0033-2909.119.2.254</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kriz</surname> <given-names>S.</given-names></name> <name><surname>Hegarty</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Top-down and bottom-up influences on learning from animations.</article-title> <source><italic>Int. J. Hum. Comput. Stud.</italic></source> <volume>65</volume> <fpage>911</fpage>&#x2013;<lpage>930</lpage>.</citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Langner</surname> <given-names>A.</given-names></name> <name><surname>Graulich</surname> <given-names>N.</given-names></name> <name><surname>Nied</surname> <given-names>M.</given-names></name></person-group> (<year>2022</year>). <article-title>Eye Tracking as a Promising Tool in Pre-Service Teacher Education- A New Approach to Promote Skills for Digital Multimedia Design.</article-title> <source><italic>J. Chem. Educ.</italic></source> <volume>99</volume> <fpage>1651</fpage>&#x2013;<lpage>1659</lpage>. <pub-id pub-id-type="doi">10.1021/acs.jchemed.1c01122</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lingzhu</surname> <given-names>J.</given-names></name></person-group> (<year>2003</year>). <article-title>Listening activities for effective top-down processing.</article-title> <source><italic>Internet TESL J.</italic></source> <volume>9</volume>.</citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lockhofen</surname> <given-names>D. E. L.</given-names></name> <name><surname>Mulert</surname> <given-names>C.</given-names></name></person-group> (<year>2021</year>). <article-title>Neurochemistry of Visual Attention.</article-title> <source><italic>Front. Neurosci.</italic></source> <volume>15</volume>:<issue>643597</issue>. <pub-id pub-id-type="doi">10.3389/fnins.2021.643597</pub-id> <pub-id pub-id-type="pmid">34025339</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lupyan</surname> <given-names>G.</given-names></name></person-group> (<year>2017</year>). <article-title>Changing what you see by changing what you know: The role of attention.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>8</volume>:<issue>553</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2017.00553</pub-id> <pub-id pub-id-type="pmid">28507524</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mayer</surname> <given-names>R. E.</given-names></name></person-group> (<year>2014</year>). &#x201C;<article-title>Cognitive theory of multimedia learning</article-title>,&#x201D; in <source><italic>The Cambridge handbook of multimedia learning</italic></source>, <role>ed.</role> <person-group person-group-type="editor"><name><surname>Mayer</surname> <given-names>R. E.</given-names></name></person-group> (<publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>), <fpage>43</fpage>&#x2013;<lpage>71</lpage>. <pub-id pub-id-type="doi">10.1017/CBO9781139547369.005</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mayring</surname> <given-names>P.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>Qualitative content analysis: theoretical background and procedures</article-title>,&#x201D; in <source><italic>Approaches to qualitative research in mathematics education. Advances in mathematics education</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Bikner-Ahsbahs</surname> <given-names>A.</given-names></name> <name><surname>Knipping</surname> <given-names>C.</given-names></name> <name><surname>Presmeg</surname> <given-names>N.</given-names></name></person-group> (<publisher-loc>Dordrecht</publisher-loc>: <publisher-name>Springer</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-94-017-9181-6_13</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McIntyre</surname> <given-names>N. A.</given-names></name> <name><surname>Foulsham</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Scanpath analysis of expertise and culture in teacher gaze in real-world classrooms.</article-title> <source><italic>Instr. Sci.</italic></source> <volume>46</volume> <fpage>435</fpage>&#x2013;<lpage>455</lpage>. <pub-id pub-id-type="doi">10.1007/s11251-017-9445-x</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McIntyre</surname> <given-names>N. A.</given-names></name> <name><surname>Mainhard</surname> <given-names>M. T.</given-names></name> <name><surname>Klassen</surname> <given-names>R. M.</given-names></name></person-group> (<year>2017</year>). <article-title>Are you looking to teach? Cultural, temporal and dynamic insights into expert teacher gaze.</article-title> <source><italic>Learn. Instr.</italic></source> <volume>49</volume> <fpage>41</fpage>&#x2013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2016.12.005</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Minarikova</surname> <given-names>E.</given-names></name> <name><surname>Smidekova</surname> <given-names>Z.</given-names></name> <name><surname>Janik</surname> <given-names>M.</given-names></name> <name><surname>Holmqvist</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>Teachers&#x2019; professional vision: Teachers&#x2019; gaze during the act of teaching and after the event.</article-title> <source><italic>Front. Educ.</italic></source> <volume>6</volume>:<issue>716579</issue>. <pub-id pub-id-type="doi">10.3389/feduc.2021.716579</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mussgnug</surname> <given-names>M.</given-names></name> <name><surname>Lohmeyer</surname> <given-names>Q.</given-names></name> <name><surname>Meboldt</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). &#x201C;<article-title>Raising designers&#x2019; awareness of user experience by mobile eye tracking records</article-title>,&#x201D; in <source><italic>DS 78: Proceedings of the 16th International conference on engineering and product design education (E&#x0026;PDE14), design education and human technology relations</italic></source> (<publisher-loc>The Netherlands</publisher-loc>: <publisher-name>University of Twente</publisher-name>).</citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Narciss</surname> <given-names>S.</given-names></name> <name><surname>Huth</surname> <given-names>K.</given-names></name></person-group> (<year>2006</year>). <article-title>Fostering achievement and motivation with bug-related tutoring feedback in a computer-based training for written subtraction.</article-title> <source><italic>Learn. Instr.</italic></source> <volume>16</volume> <fpage>310</fpage>&#x2013;<lpage>322</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2006.07.003</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ozcelik</surname> <given-names>E.</given-names></name> <name><surname>Arslan-Ari</surname> <given-names>I.</given-names></name> <name><surname>Cagiltay</surname> <given-names>K.</given-names></name></person-group> (<year>2010</year>). <article-title>Why does signaling enhance multimedia learning? Evidence from eye movements.</article-title> <source><italic>Comput. Hum. Behav.</italic></source> <volume>26</volume> <fpage>110</fpage>&#x2013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2009.09.001</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pinto</surname> <given-names>Y.</given-names></name> <name><surname>van der Leij</surname> <given-names>A. R.</given-names></name> <name><surname>Sligte</surname> <given-names>I. G.</given-names></name> <name><surname>Lamme</surname> <given-names>V. A. F.</given-names></name> <name><surname>Scholte</surname> <given-names>H. S.</given-names></name></person-group> (<year>2013</year>). <article-title>Bottom-up and top-down attention are independent.</article-title> <source><italic>J. Vis.</italic></source> <volume>13</volume>:<issue>16</issue>. <pub-id pub-id-type="doi">10.1167/13.3.16</pub-id> <pub-id pub-id-type="pmid">23863334</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Richter</surname> <given-names>J.</given-names></name> <name><surname>Scheiter</surname> <given-names>K.</given-names></name> <name><surname>Eitel</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). <article-title>Signaling text-picture relations in multimedia learning: A comprehensive meta-analysis.</article-title> <source><italic>Educ. Res. Rev.</italic></source> <volume>17</volume> <fpage>19</fpage>&#x2013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1016/j.edurev.2015.12.003</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>R&#x00F6;hl</surname> <given-names>S.</given-names></name> <name><surname>Bijlsma</surname> <given-names>H.</given-names></name> <name><surname>Rollett</surname> <given-names>W.</given-names></name></person-group> (<year>2021</year>). &#x201C;<article-title>The process model of student feedback on teaching (SFT): A theoretical framework and introductory remarks</article-title>,&#x201D; in <source><italic>Student feedback on teaching in schools</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Rollett</surname> <given-names>W.</given-names></name> <name><surname>Bijlsma</surname> <given-names>H.</given-names></name> <name><surname>R&#x00F6;hl</surname> <given-names>S.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-030-75150-0_1</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rollett</surname> <given-names>W.</given-names></name> <name><surname>Bijlsma</surname> <given-names>H.</given-names></name> <name><surname>R&#x00F6;hl</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). &#x201C;<article-title>Student feedback on teaching in schools: Current state of research and future perspectives</article-title>,&#x201D; in <source><italic>Student feedback on teaching in schools</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Rollett</surname> <given-names>W.</given-names></name> <name><surname>Bijlsma</surname> <given-names>H.</given-names></name> <name><surname>R&#x00F6;hl</surname> <given-names>S.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>259</fpage>&#x2013;<lpage>270</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-75150-0_16</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salvucci</surname> <given-names>D. D.</given-names></name> <name><surname>Goldberg</surname> <given-names>J. H.</given-names></name></person-group> (<year>2000</year>). &#x201C;<article-title>Identifying fixations and saccades in eye-tracking protocols</article-title>,&#x201D; in <source><italic>Proceedings of the 2000 symposium on eye tracking research &#x0026; applications</italic></source>, (<publisher-loc>Palm Beach Gardens, FL</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>71</fpage>&#x2013;<lpage>78</lpage>. <pub-id pub-id-type="doi">10.1145/355017.355028</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmidt-Weigand</surname> <given-names>F.</given-names></name> <name><surname>Kohnert</surname> <given-names>A.</given-names></name> <name><surname>Glowalla</surname> <given-names>U.</given-names></name></person-group> (<year>2010</year>). <article-title>A closer look at split visual attention in system- and self-paced instruction in multimedia learning.</article-title> <source><italic>Learn. Instr.</italic></source> <volume>20</volume> <fpage>100</fpage>&#x2013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1016/j.learninstruc.2009.02.011</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schneider</surname> <given-names>S.</given-names></name> <name><surname>Beege</surname> <given-names>M.</given-names></name> <name><surname>Nebel</surname> <given-names>S.</given-names></name> <name><surname>Rey</surname> <given-names>G. D.</given-names></name></person-group> (<year>2018</year>). <article-title>A meta-analysis of how signaling affects learning with media.</article-title> <source><italic>Educ. Res. Rev.</italic></source> <volume>23</volume> <fpage>1</fpage>&#x2013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1016/j.edurev.2017.11.001</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schweinberger</surname> <given-names>M.</given-names></name> <name><surname>Girwidz</surname> <given-names>R.</given-names></name></person-group> (<year>2022</year>). &#x201C;<article-title>&#x2018;Silent videoclips&#x2019; for teacher enhancement and physics in class&#x2014;material and training wheels</article-title>,&#x201D; in <source><italic>Physics teacher education</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Borg Marks</surname> <given-names>J.</given-names></name> <name><surname>Galea</surname> <given-names>P.</given-names></name> <name><surname>Gatt</surname> <given-names>S.</given-names></name> <name><surname>Sands</surname> <given-names>D.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>149</fpage>&#x2013;<lpage>159</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-031-06193-6_11</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shute</surname> <given-names>V. J.</given-names></name></person-group> (<year>2008</year>). <article-title>Focus on formative feedback.</article-title> <source><italic>Rev. Educ. Res.</italic></source> <volume>78</volume> <fpage>153</fpage>&#x2013;<lpage>189</lpage>. <pub-id pub-id-type="doi">10.3102/0034654307313795</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stuermer</surname> <given-names>K.</given-names></name> <name><surname>Seidel</surname> <given-names>T.</given-names></name> <name><surname>Mueller</surname> <given-names>K.</given-names></name> <name><surname>H&#x00E4;usler</surname> <given-names>J.</given-names></name> <name><surname>Cortina</surname> <given-names>K. S.</given-names></name></person-group> (<year>2017</year>). <article-title>What is in the eye of preservice teachers while instructing? An eye-tracking study about attention processes in different teaching situations.</article-title> <source><italic>Zeitschrift f&#x00FC;r Erziehungswissenschaft</italic></source> <volume>20</volume> <fpage>75</fpage>&#x2013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1007/s11618-017-0731-9</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sweller</surname> <given-names>J.</given-names></name> <name><surname>Ayres</surname> <given-names>P.</given-names></name> <name><surname>Kalyuga</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <source><italic>Cognitive load theory.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4419-8126-4</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Szulewski</surname> <given-names>A.</given-names></name> <name><surname>Braund</surname> <given-names>H.</given-names></name> <name><surname>Egan</surname> <given-names>R.</given-names></name> <name><surname>Gegenfurtner</surname> <given-names>A.</given-names></name> <name><surname>Hall</surname> <given-names>A. K.</given-names></name> <name><surname>Howes</surname> <given-names>D.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Starting to think like an expert: An analysis of resident cognitive processes during simulation-based resuscitation examinations.</article-title> <source><italic>Ann. Emerg. Med.</italic></source> <volume>74</volume> <fpage>647</fpage>&#x2013;<lpage>659</lpage>. <pub-id pub-id-type="doi">10.1016/j.annemergmed.2019.04.002</pub-id> <pub-id pub-id-type="pmid">31080034</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watzka</surname> <given-names>B.</given-names></name> <name><surname>Hoyer</surname> <given-names>C.</given-names></name> <name><surname>Ertl</surname> <given-names>B.</given-names></name> <name><surname>Girwidz</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Wirkung visueller und auditiver Hinweise auf die visuelle Aufmerksamkeit und Lernergebnisse beim Einsatz physikalischer Lernvideos.</article-title> <source><italic>Unterrichtswissenschaft</italic></source> <volume>49</volume> <fpage>627</fpage>&#x2013;<lpage>652</lpage>. <pub-id pub-id-type="doi">10.1007/s42010-021-00118-7</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wisniewski</surname> <given-names>B.</given-names></name> <name><surname>Zierer</surname> <given-names>K.</given-names></name> <name><surname>Hattie</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>The power of feedback revisited: A meta-analysis of educational feedback research.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>10</volume>:<issue>3087</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2019.03087</pub-id> <pub-id pub-id-type="pmid">32038429</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xenos</surname> <given-names>M.</given-names></name> <name><surname>Rigou</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>Teaching HCI design in a flipped learning M.Sc. course using eye tracking peer evaluation data.</article-title> <source><italic>arXiv</italic></source> [<comment>preprint</comment>].</citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xie</surname> <given-names>H.</given-names></name> <name><surname>Mayer</surname> <given-names>R. E.</given-names></name> <name><surname>Wang</surname> <given-names>F.</given-names></name> <name><surname>Zhou</surname> <given-names>Z.</given-names></name></person-group> (<year>2019</year>). <article-title>Coordinating visual and auditory cueing in multimedia learning.</article-title> <source><italic>J. Educ. Psychol.</italic></source> <volume>111</volume>:<issue>235</issue>. <pub-id pub-id-type="doi">10.1037/edu0000285</pub-id></citation></ref>
</ref-list>
</back>
</article>
