<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2021.774961</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Assessing Deception in Questionnaire Surveys With Eye-Tracking</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Fang</surname> <given-names>Xinyue</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x2020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1437494/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Sun</surname> <given-names>Yiteng</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x2020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1476633/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Zheng</surname> <given-names>Xinyi</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Wang</surname> <given-names>Xinrong</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1545622/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Deng</surname> <given-names>Xuemei</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Wang</surname> <given-names>Mei</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Mechanical Engineering, Sichuan University</institution>, <addr-line>Chengdu</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>School of Design, South China University of Technology</institution>, <addr-line>Guangzhou</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Li Liu, Beijing Normal University, China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Lijun Yin, Sun Yat-sen University, China; Wei Zhou, Capital Normal University, China</p></fn>
<corresp id="c001">&#x002A;Correspondence: Mei Wang, <email>wangmei@scu.edu.cn</email></corresp>
<fn fn-type="equal" id="fn002"><p><sup>&#x2020;</sup>These authors have contributed equally to this work and share first authorship</p></fn>
<fn fn-type="other" id="fn004"><p>This article was submitted to Cognitive Science, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>11</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>12</volume>
<elocation-id>774961</elocation-id>
<history>
<date date-type="received">
<day>13</day>
<month>09</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>10</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2021 Fang, Sun, Zheng, Wang, Deng and Wang.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Fang, Sun, Zheng, Wang, Deng and Wang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Deceit often occurs in questionnaire surveys, leading to misreported data and poor reliability. The purpose of this study is to explore whether eye-tracking can contribute to the detection of deception in questionnaire surveys, and whether the eye behaviors that appear in instructed lying also exist in spontaneous lying. Two studies were conducted to explore eye movement behaviors under instructed and spontaneous lying conditions. The results showed that pupil size and fixation behaviors are both reliable indicators for detecting lies in questionnaire surveys, whereas blink and saccade behaviors do not appear to predict deception. Deception resulted in increased pupil size, fixation count, and fixation duration. Meanwhile, respondents focused on different areas of the questionnaire when lying versus telling the truth. Furthermore, in the actual deception situation, a linear support vector machine (SVM) deception classifier achieved an accuracy of 74.09%. In sum, this study indicates that the eye-tracking signatures of lying are not restricted to instructed deception, demonstrates the potential of using eye-tracking to detect deception in questionnaire surveys, and contributes to questionnaire surveys on sensitive issues.</p>
</abstract>
<kwd-group>
<kwd>lie detection</kwd>
<kwd>eye behavior</kwd>
<kwd>questionnaire surveys</kwd>
<kwd>deception</kwd>
<kwd>eye-tracking</kwd>
</kwd-group><counts>
<fig-count count="4"/>
<table-count count="8"/>
<equation-count count="3"/>
<ref-count count="89"/>
<page-count count="15"/>
<word-count count="12293"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="S1">
<title>Introduction</title>
<p>The questionnaire is one of the most widely used tools for data collection, owing to its wide range of applications, flexibility, speed, and convenience (<xref ref-type="bibr" rid="B62">Taherdoost, 2016</xref>). However, respondents have subjectivity and freedom when filling out questionnaires, so answers to sensitive questions (such as evaluations of self or others, substance use, sexual activity, political opinions, or antisocial attitudes) are often distorted (<xref ref-type="bibr" rid="B30">Holtgraves, 2004</xref>; <xref ref-type="bibr" rid="B36">Krumpal, 2013</xref>). Respondents go through five stages when answering a self-report: (1) interpret the question, (2) retrieve information, (3) generate an opinion, (4) format a response, and (5) edit the response (<xref ref-type="bibr" rid="B60">Sudman et al., 1997</xref>). The effect of social desirability usually operates at the final editing stage (<xref ref-type="bibr" rid="B64">Tourangeau and Rasinski, 1988</xref>; <xref ref-type="bibr" rid="B60">Sudman et al., 1997</xref>; <xref ref-type="bibr" rid="B30">Holtgraves, 2004</xref>). Respondents weigh the benefits and risks of telling the truth; when the risks outweigh the benefits, they choose to lie (<xref ref-type="bibr" rid="B65">Tourangeau et al., 2000</xref>; <xref ref-type="bibr" rid="B78">Walzenbach, 2019</xref>). Respondents may exaggerate, minimize, omit, or present themselves in a socially desirable light (<xref ref-type="bibr" rid="B6">Bond and DePaulo, 2006</xref>; <xref ref-type="bibr" rid="B56">Rauhut and Krumpal, 2008</xref>; <xref ref-type="bibr" rid="B53">Preisend&#x00F6;rfer and Wolter, 2014</xref>; <xref ref-type="bibr" rid="B78">Walzenbach, 2019</xref>). Accordingly, lying in surveys can lead to misreported data and reduce the reliability of the findings. 
Research on sensitive questions, in particular, is the area most prone to survey misreporting (<xref ref-type="bibr" rid="B40">Lensvelt-Mulders, 2008</xref>; <xref ref-type="bibr" rid="B53">Preisend&#x00F6;rfer and Wolter, 2014</xref>). Fortunately, questionnaire design (such as question wording) can change the sensitivity of questions, which has a substantial impact on how people respond (<xref ref-type="bibr" rid="B78">Walzenbach, 2019</xref>). It is therefore essential to identify and revise the questions in a questionnaire that tend to elicit lying. Lie detection in questionnaire surveys helps improve questionnaire design before publication and helps researchers avoid relying on unreliable results.</p>
<p>Lie detection has long been a source of fascination. Although detecting lies is necessary, human accuracy at detecting lies is around chance level, averaging 54% (<xref ref-type="bibr" rid="B6">Bond and DePaulo, 2006</xref>). Deception is usually thought to be correlated with cognitive load, and there are three main theoretical perspectives on the relationship between the two. The first is that lying involves more complex cognitive processes and imposes a higher cognitive load than telling the truth (<xref ref-type="bibr" rid="B89">Zuckerman et al., 1981</xref>; <xref ref-type="bibr" rid="B74">Vrij et al., 2001</xref>, <xref ref-type="bibr" rid="B76">2017</xref>; <xref ref-type="bibr" rid="B57">Roma et al., 2018</xref>): people modify their answers to meet social desirability at the response-editing stage, which produces more hesitation (<xref ref-type="bibr" rid="B29">Holden et al., 2001</xref>; <xref ref-type="bibr" rid="B20">DePaulo et al., 2003</xref>). The second perspective is the opposite (<xref ref-type="bibr" rid="B28">Holden et al., 1985</xref>; <xref ref-type="bibr" rid="B38">Leary and Kowalski, 1990</xref>): when lying, respondents do not need to recall accurate information; they respond directly according to social desirability and skip the information-retrieval stage. The third perspective suggests that response time depends on the lying schema and the social desirability of the test item (<xref ref-type="bibr" rid="B10">Brunetti et al., 1998</xref>; <xref ref-type="bibr" rid="B29">Holden et al., 2001</xref>). A meta-analysis of 26 cognitive lie-detection studies reported a weighted mean overall accuracy of 74% (<xref ref-type="bibr" rid="B76">Vrij et al., 2017</xref>). 
To date, however, most studies of lie detection have focused on face-to-face communication, such as criminal justice scenarios (<xref ref-type="bibr" rid="B52">Porter and ten Brinke, 2010</xref>; <xref ref-type="bibr" rid="B61">Synnott et al., 2015</xref>; <xref ref-type="bibr" rid="B75">Vrij and Fisher, 2016</xref>) and conversation scenarios (<xref ref-type="bibr" rid="B73">Vrij, 2018</xref>; <xref ref-type="bibr" rid="B48">Nahari and Nisin, 2019</xref>). The literature on questionnaire surveys, which lack verbal cues, is far less rich. Moreover, within this field, most studies have focused only on lie detection in personality tests (<xref ref-type="bibr" rid="B71">van Hooft and Born, 2012</xref>; <xref ref-type="bibr" rid="B45">Mazza et al., 2020</xref>). However, questionnaires cover a wide range of areas beyond personality tests, and far too little attention has been paid to lie detection in these broader domains.</p>
<p>Most lie-detection studies have been limited to simulated scenarios in which participants were instructed to lie. Nevertheless, when instructed to lie, participants&#x2019; motivation is low; they may have little concern for accuracy and need not fear that their behaviors will be detected (<xref ref-type="bibr" rid="B72">von Hippel and Trivers, 2011</xref>; <xref ref-type="bibr" rid="B71">van Hooft and Born, 2012</xref>). For this reason, several authors have proposed that deception-detection studies should be conducted in a more ecological way (<xref ref-type="bibr" rid="B83">Wright et al., 2013</xref>; <xref ref-type="bibr" rid="B41">Levine, 2018</xref>). As <xref ref-type="bibr" rid="B23">Ganis et al. (2003)</xref> and <xref ref-type="bibr" rid="B85">Yin et al. (2016)</xref> discussed, fMRI reveals different patterns of activation for rehearsed versus spontaneous lies. Furthermore, <xref ref-type="bibr" rid="B19">Delgado-Herrera et al. (2021)</xref> performed a meta-analysis of fMRI deception tasks published from 2001 to 2019 and found that tasks with low and high ecological validity activate different brain areas, perhaps because highly ecologically valid tasks are more realistic and engage a broader network of brain mechanisms. In contrast, the Concealed Information Test results of <xref ref-type="bibr" rid="B25">Geven et al. (2018)</xref> showed no significant differences in skin conductance, heart rate, or respiration between spontaneous and instructed deception. <xref ref-type="bibr" rid="B1">Ask et al. (2020)</xref> found that instructed lies have little effect on human lie-detection performance. Whether findings from the detection of instructed lies can be applied to reality therefore remains controversial; there may be discrepancies between the mental processes of instructed lying and spontaneous lying in real life.</p>
<p>Eye-tracking is often considered an ideal measure for lie detection, as eye behaviors are automatic physiological responses that cannot be consciously controlled (<xref ref-type="bibr" rid="B12">Chen et al., 2013</xref>; <xref ref-type="bibr" rid="B26">Gonzales, 2018</xref>; <xref ref-type="bibr" rid="B4">Berkovsky et al., 2019</xref>). Eye-tracking is an appealing sensor for deception detection in questionnaire surveys, as it does not require direct physical contact (which may disturb respondents), is easy to use, collects diverse information, and can be used in automated screening systems (<xref ref-type="bibr" rid="B15">Cook et al., 2012</xref>; <xref ref-type="bibr" rid="B55">Proudfoot et al., 2015</xref>; <xref ref-type="bibr" rid="B88">Zi-Han and Xingshan, 2015</xref>; <xref ref-type="bibr" rid="B84">Ye et al., 2020</xref>). Previous studies showed that eye behaviors reflect people&#x2019;s cognitive load (<xref ref-type="bibr" rid="B86">Zagermann et al., 2016</xref>), emotions (<xref ref-type="bibr" rid="B87">Zheng et al., 2014</xref>; <xref ref-type="bibr" rid="B51">Perkhofer and Lehner, 2019</xref>; <xref ref-type="bibr" rid="B42">Lim et al., 2020</xref>), attention (<xref ref-type="bibr" rid="B39">Lee and Ahn, 2012</xref>; <xref ref-type="bibr" rid="B67">Tsai et al., 2012</xref>), and information processing (<xref ref-type="bibr" rid="B9">Bruneau et al., 2002</xref>). High cognitive load usually causes pupil dilation, a decreased blink rate, and increased saccade velocity and fixation duration (<xref ref-type="bibr" rid="B80">Wang et al., 2014</xref>; <xref ref-type="bibr" rid="B86">Zagermann et al., 2016</xref>; <xref ref-type="bibr" rid="B21">Einh&#x00E4;user, 2017</xref>; <xref ref-type="bibr" rid="B3">Behroozi et al., 2018</xref>; <xref ref-type="bibr" rid="B70">van der Wel and van Steenbergen, 2018</xref>; <xref ref-type="bibr" rid="B34">Keskin et al., 2019</xref>, <xref ref-type="bibr" rid="B35">2020</xref>). 
Arousal changes can affect blinks, saccades, and fixations (<xref ref-type="bibr" rid="B43">Maffei and Angrilli, 2018</xref>); vigilance and fatigue can be detected from saccades; and information processing can be predicted from saccades and fixations (<xref ref-type="bibr" rid="B9">Bruneau et al., 2002</xref>; <xref ref-type="bibr" rid="B43">Maffei and Angrilli, 2018</xref>). Fixation location can indicate the area of current focus (<xref ref-type="bibr" rid="B58">Rudmann et al., 2003</xref>). These findings all help to analyze the mental processes of deception. Furthermore, many studies have applied eye-tracking to detect deception, with promising results. Deception changes people&#x2019;s fixation patterns (<xref ref-type="bibr" rid="B69">Twyman et al., 2014</xref>). When lying, pupil diameter becomes larger due to cognitive load, memory retrieval, vigilance, anxiety, and related factors (<xref ref-type="bibr" rid="B68">Twyman et al., 2013</xref>; <xref ref-type="bibr" rid="B54">Proudfoot et al., 2016</xref>). <xref ref-type="bibr" rid="B77">Vrij et al. (2015)</xref> concluded that memory retrieval is greater when lying, so saccade velocity is higher. <xref ref-type="bibr" rid="B24">George et al. (2017)</xref> found that blink duration and blink count are higher when lying. <xref ref-type="bibr" rid="B81">Webb et al. (2009)</xref> suggested that people experience greater arousal when lying, resulting in greater pupil dilation and blink frequency. <xref ref-type="bibr" rid="B7">Borza et al. (2018)</xref> analyzed eye movements to detect deception and obtained an accuracy of 99.3% on their dataset. <xref ref-type="bibr" rid="B71">van Hooft and Born (2012)</xref> found that, on a personality test, more fixations occurred on the extreme response options when lying and on the middle response options when responding honestly; they achieved 82.9% lie-detection accuracy with eye-tracking. 
Consequently, eye behaviors have attracted increasing attention as psychological and physiological indicators of lying (<xref ref-type="bibr" rid="B5">Bessonova and Oboznov, 2018</xref>).</p>
<p>In summary, few studies have investigated lie detection in questionnaire surveys, and the mental processes of spontaneous lying may not be identical to those of instructed lying. Therefore, this study simulated a teacher-evaluation scenario to explore whether the subtle reactions of lying could be identified by eye-tracking in questionnaire research, and examined whether the changes in eye behaviors during instructed lying generalize to spontaneous lying. In Study 1, following <xref ref-type="bibr" rid="B71">van Hooft and Born (2012)</xref>, participants were instructed to lie or to be honest, and the relationship between eye-tracking indicators and deception was initially explored. Consistent with that earlier work, we hypothesized that eye behaviors would differ significantly between the lying and honest conditions in Study 1. However, spontaneous lying in actual situations may involve more diverse mental processes. Consequently, we designed Study 2 to test whether the relationship between eye-tracking indicators and deception still holds in an actual situation. In Study 2, we created a motivation for participants to lie and encouraged them to lie spontaneously and genuinely. Study 2 investigated eye behaviors when lying in the actual situation and compared them with the findings of Study 1 to examine whether the eye behaviors that appeared in instructed lying still exist in spontaneous lying, and thus to identify reliable eye-movement indicators for detecting lies. Our main hypothesis in Study 2 was that eye-tracking can effectively help to detect deception in questionnaire surveys in realistic situations. 
The present study explored whether the eye behaviors observed in instructed lying can be generalized to reality and identified reliable variables for lie detection in the actual situation, contributing to our understanding of the relationship between deception and eye behaviors. Moreover, it confirms the potential of eye-tracking for non-verbal lie detection and offers implications for detecting deception in questionnaire surveys.</p>
</sec>
<sec id="S2">
<title>Study 1: Instructed Lie</title>
<sec id="S2.SS1">
<title>Materials and Methods</title>
<sec id="S2.SS1.SSS1">
<title>Scenario</title>
<p>A scenario was set up in which participants evaluated their teachers. Chinese students are generally respectful of their teachers and desire to please their parents, teachers, and other people in positions of power (<xref ref-type="bibr" rid="B2">Bear et al., 2014</xref>). Chinese cultural expectations of the teacher&#x2013;student relationship are &#x201C;well-defined, rigidly hierarchical and authoritarian&#x201D; (<xref ref-type="bibr" rid="B27">Ho and Ho, 2008</xref>). As the old Chinese idiom says, &#x201C;once my teacher, forever my parents.&#x201D; Students should respect their teachers as they respect their parents, including showing obedience (<xref ref-type="bibr" rid="B32">Hui et al., 2011</xref>). Respect for teachers is a revered virtue in China. Chinese students have high respect for those who provide knowledge and avoid challenging authority (<xref ref-type="bibr" rid="B82">Wei et al., 2015</xref>). Meanwhile, when evaluating their teachers, students often worry that the teachers may be able to view the evaluations and judge them negatively. Hence, under real-name conditions, most students choose to make no negative comments in order to avoid adverse consequences.</p>
<p>Participants were asked to recall a teacher they disliked. Then they were instructed to fill out the questionnaire according to the actual situation and imagine that the evaluation was in real-name condition.</p>
</sec>
<sec id="S2.SS1.SSS2">
<title>Materials</title>
<p>A questionnaire was designed for teacher evaluation. It consists of 10 questions, covering the evaluation of teaching level and attitudes toward the teacher. A five-point scale was used, with negative and positive keywords on either side of the options. Furthermore, this study defined several areas of interest (AOIs): the question text (QT), the extreme negative option (NO), the negative keyword (NK), the extreme positive option (PO), the positive keyword (PK), the extreme options (EO), and the medium options (MO). The questionnaire and marked AOIs are shown in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Example of stimuli. <bold>(A)</bold> Example of the questionnaire with marked AOIs. <bold>(B)</bold> Example of the questionnaire after translation into English. AOIs, areas of interest.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-12-774961-g001.tif"/>
</fig>
</sec>
<sec id="S2.SS1.SSS3">
<title>Apparatus</title>
<p>An SMI iView X<sup>TM</sup> RED desktop system with high spatial resolution (0.005&#x00B0;) and a sampling rate of 500 Hz (1-ms temporal resolution) was used to record the participants&#x2019; eye behaviors. The system includes an iView PC, from which the operator controls the experiment; a 22-in. display screen (pixel resolution 1680 &#x00D7; 1050) that presents the experimental stimuli to the participants; and an eye-tracking module installed under the display screen to track the participants&#x2019; eye behaviors in real time.</p>
</sec>
<sec id="S2.SS1.SSS4">
<title>Participants</title>
<p>Thirty-one participants (18 males, 13 females), aged 20&#x2013;26 (<italic>M</italic> = 22.68), were recruited from Sichuan University. All participants were healthy, had normal or corrected-to-normal vision, and had no reported history of neurological or psychiatric disorders. All received a small honorarium for their participation.</p>
</sec>
<sec id="S2.SS1.SSS5">
<title>Procedure</title>
<p>Firstly, participants were asked to recall a teacher whom they disliked and to describe him/her briefly. Then, participants were told that they would evaluate the teacher through questionnaires and that their eye behaviors would be recorded. Afterward, participants were provided with instructions that directed them either to respond honestly or to imagine responding under real-name evaluation. Each participant answered the questionnaire in both situations. The instructions were adapted from previous studies (<xref ref-type="bibr" rid="B46">McFarland and Ryan, 2000</xref>; <xref ref-type="bibr" rid="B71">van Hooft and Born, 2012</xref>). To eliminate order effects, the order of the lying and honest conditions was randomized, and an unrelated questionnaire was interspersed between the two responses to reduce learning effects.</p>
<p>The instruction for encouraging participants to respond honestly is as follows:</p>
<p><italic>You will be presented with ten questions with five response options. Please answer the questions as honestly as possible. Your answers remain confidential and will be used for research purposes only. For this study, we are interested in your honest answers, so please answer the following questions as accurately and honestly as possible.</italic></p>
<p>The instruction for directing participants to imagine evaluation as a real-name situation to respond is as follows:</p>
<p><italic>You will be presented with ten questions with five response options. Please imagine that the teacher you are evaluating can see your answers in real-name. For this study, we are not interested in your honest answers. Instead, for each question, please select the answer you think is more beneficial to you.</italic></p>
<p>After understanding the requirements, participants sat about 60 cm from the screen. After two to four rounds of eye-tracking calibration, the experimental material was displayed on the screen, and participants completed the evaluation questionnaire. By comparing the two sets of ratings, this study identified the questions that were rated differently. Afterward, we confirmed with participants whether the rating difference on each question was caused by lying in the imagined real-name condition. The procedure of the experiment is shown in <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>The experiment procedure of Study 1.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-12-774961-g002.tif"/>
</fig>
</sec>
<sec id="S2.SS1.SSS6">
<title>Data Analysis</title>
<p>Eye movements were recorded and processed by Experiment Center 3.5 and BeGaze 3.5 on iView PC. Data analyses were performed using SPSS 26.0.</p>
<p>Each participant answered the questionnaire both in the honest condition and in the condition of imagining the evaluation as a real-name situation. The questionnaire consisted of 10 questions, so each participant answered 10 pairs of questions, generating 10 paired data sets. Participants&#x2019; ratings when instructed to be honest were taken as their true evaluations. Most participants did not lie on every question, so this study compared the two questionnaire responses for differences in ratings. We removed the eye-tracking data for questions that were not rated higher in the instructed-lying condition than in the honest condition, as well as for questions on which participants indicated that no lie existed. We also excluded eye-tracking data with lost eye movements, along with the extreme values indicated by boxplots.</p>
<p>The normality of the data was assessed using the Kolmogorov&#x2013;Smirnov test. For normally distributed data, a paired-samples <italic>t</italic>-test was performed to test for differences. For data departing from normality, this study attempted a transformation and conducted a paired-samples <italic>t</italic>-test on variables that could be transformed to normal distributions; the transformation followed the methods of a previous study (<xref ref-type="bibr" rid="B16">Coolican, 2014</xref>). For variables that could not be transformed to normal distributions, the Wilcoxon signed-rank test was performed. The significance level was 0.05. Following previous studies (<xref ref-type="bibr" rid="B22">Fritz et al., 2011</xref>; <xref ref-type="bibr" rid="B16">Coolican, 2014</xref>; <xref ref-type="bibr" rid="B63">Tomczak and Tomczak, 2014</xref>), Cohen&#x2019;s <italic>d</italic> is used to indicate the effect size of the paired-samples <italic>t</italic>-test, and <italic>r</italic> is used to indicate the effect size of the Wilcoxon signed-rank test. In Study 1, paired-samples <italic>t</italic>-tests were conducted for ratings and pupil size, and Wilcoxon signed-rank tests were conducted for blinks, saccades, and fixations.</p>
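The decision procedure described above (normality check, then a paired-samples <italic>t</italic>-test with Cohen&#x2019;s <italic>d</italic>, or a Wilcoxon signed-rank test with the effect size <italic>r</italic>) can be sketched in a few lines of code. The following is an illustrative Python sketch using SciPy, not the authors&#x2019; SPSS 26.0 workflow; the function name and the step that recovers an approximate Z from the Wilcoxon p-value are our own assumptions for illustration.

```python
# Illustrative sketch (not the authors' SPSS workflow): choose a paired test
# by the normality of the pairwise differences, then report an effect size.
import numpy as np
from scipy import stats

def paired_comparison(lie, truth, alpha=0.05):
    """Compare paired eye-movement measures from the lying and honest conditions."""
    lie, truth = np.asarray(lie, float), np.asarray(truth, float)
    diff = lie - truth
    # Kolmogorov-Smirnov test of the standardized differences against a normal
    z = (diff - diff.mean()) / diff.std(ddof=1)
    p_norm = stats.kstest(z, "norm").pvalue
    if p_norm >= alpha:  # differences look normal: paired-samples t-test
        t, p = stats.ttest_rel(lie, truth)
        d = diff.mean() / diff.std(ddof=1)  # Cohen's d for paired data
        return {"test": "t", "stat": t, "p": p, "effect": d}
    # otherwise: Wilcoxon signed-rank test, effect size r = Z / sqrt(N)
    w, p = stats.wilcoxon(lie, truth)
    # recover an approximate Z from the two-sided p-value (assumption)
    z_stat = stats.norm.isf(p / 2.0)
    return {"test": "wilcoxon", "stat": w, "p": p, "effect": z_stat / np.sqrt(len(diff))}
```

In practice, a transformation step for non-normal variables (as the authors describe) would precede the Wilcoxon fallback; it is omitted here for brevity.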
</sec>
</sec>
<sec sec-type="results" id="S2.SS2">
<title>Results</title>
<sec id="S2.SS2.SSS1">
<title>Ratings</title>
<p>The results showed that 87.1% of the participants (27 of 31) chose to lie when imagining responding under real-name evaluation. In the real-name condition, the evaluations received higher ratings (<italic>M</italic> = 3.590, <italic>SD</italic> = 1.197) than in the honest condition (<italic>M</italic> = 2.255, <italic>SD</italic> = 1.211). A paired-samples <italic>t</italic>-test revealed a significant difference between the ratings in the real-name condition and the honest condition (<italic>t</italic> = 15.861, Cohen&#x2019;s <italic>d</italic> = 0.901, <italic>p</italic> &#x003C; 0.001). Of all questions, 73.55% (228 of 310) were rated higher in the real-name condition and were confirmed by participants to involve lies. Participants&#x2019; responses in the two conditions ranged from slightly to completely different.</p>
</sec>
<sec id="S2.SS2.SSS2">
<title>Pupil Size</title>
<p>As shown in <xref ref-type="table" rid="T1">Table 1</xref>, there were no significant differences in overall pupil size between lying and being honest (<italic>p</italic> = 0.722). However, when analyzing pupil size within each area of interest, pupil size in the MO area was significantly larger when participants were lying than when they were honest (Cohen&#x2019;s <italic>d</italic> = 0.310, <italic>p</italic> = 0.001).</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>The analysis of pupil size in Study 1.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Variable</td>
<td valign="top" align="center" colspan="2">Instructed lie<hr/></td>
<td valign="top" align="center" colspan="2">Truth<hr/></td>
<td valign="top" align="center"><italic>t</italic></td>
<td valign="top" align="center">Effect size Cohen&#x2019;s <italic>d</italic></td>
<td valign="top" align="center"><italic>P</italic>-value (2-tailed)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td/>
<td/>
<td/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Pupil diameter (mm)</td>
<td valign="top" align="center">4.243</td>
<td valign="top" align="center">0.456</td>
<td valign="top" align="center">4.239</td>
<td valign="top" align="center">0.409</td>
<td valign="top" align="center">0.356</td>
<td valign="top" align="center">0.026</td>
<td valign="top" align="center">0.722</td>
</tr>
<tr>
<td valign="top" align="left">Pupil diameter in the MO area (mm)</td>
<td valign="top" align="center">4.274</td>
<td valign="top" align="center">0.428</td>
<td valign="top" align="center">4.204</td>
<td valign="top" align="center">0.395</td>
<td valign="top" align="center">3.408</td>
<td valign="top" align="center">0.310</td>
<td valign="top" align="center">0.001<xref ref-type="table-fn" rid="t1fn1">&#x002A;&#x002A;</xref></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="t1fn1"><p><italic>Significance codes: &#x002A;&#x002A;<italic>p</italic> &#x003C; 0.01.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="S2.SS2.SSS3">
<title>Blinks</title>
<p>This study found increased blink count (<italic>r</italic> = &#x2212;0.263, <italic>p</italic> &#x003C; 0.001), blink frequency (<italic>r</italic> = &#x2212;0.232, <italic>p</italic> = 0.001), and total blink duration (<italic>r</italic> = &#x2212;0.190, <italic>p</italic> = 0.006) when lying, but no change in average blink duration (<italic>p</italic> &#x003E; 0.05), as shown in <xref ref-type="table" rid="T2">Table 2</xref>.</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>The analysis of blink behaviors, saccade behaviors and fixation behaviors in Study 1.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Variable</td>
<td valign="top" align="center" colspan="2">Instructed lie<hr/></td>
<td valign="top" align="center" colspan="2">Truth<hr/></td>
<td valign="top" align="center"><italic>Z</italic></td>
<td valign="top" align="center">Effect size <italic>r</italic></td>
<td valign="top" align="center"><italic>P</italic>-value (2-tailed)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td/>
<td/>
<td valign="top" align="center"/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Blink count</td>
<td valign="top" align="right">2.258</td>
<td valign="top" align="right">2.829</td>
<td valign="top" align="right">1.535</td>
<td valign="top" align="right">1.717</td>
<td valign="top" align="center">&#x2212;3.839</td>
<td valign="top" align="center">&#x2212;0.263</td>
<td valign="top" align="center">&#x003C; 0.001<xref ref-type="table-fn" rid="t2fn1">&#x002A;&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Blink frequency (count/s)</td>
<td valign="top" align="right">0.552</td>
<td valign="top" align="right">0.709</td>
<td valign="top" align="right">0.424</td>
<td valign="top" align="right">0.520</td>
<td valign="top" align="center">&#x2212;3.364</td>
<td valign="top" align="center">&#x2212;0.232</td>
<td valign="top" align="center">0.001<xref ref-type="table-fn" rid="t2fn1">&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Blink duration average (ms)</td>
<td valign="top" align="right">147.297</td>
<td valign="top" align="right">106.918</td>
<td valign="top" align="right">152.751</td>
<td valign="top" align="right">167.271</td>
<td valign="top" align="center">&#x2212;0.085</td>
<td valign="top" align="center">&#x2212;0.006</td>
<td valign="top" align="center">0.933</td>
</tr>
<tr>
<td valign="top" align="left">Blink duration total (ms)</td>
<td valign="top" align="right">425.851</td>
<td valign="top" align="right">610.696</td>
<td valign="top" align="right">314.900</td>
<td valign="top" align="right">396.193</td>
<td valign="top" align="center">&#x2212;2.772</td>
<td valign="top" align="center">&#x2212;0.190</td>
<td valign="top" align="center">0.006<xref ref-type="table-fn" rid="t2fn1">&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Saccade count</td>
<td valign="top" align="right">26.047</td>
<td valign="top" align="right">13.945</td>
<td valign="top" align="right">23.324</td>
<td valign="top" align="right">12.076</td>
<td valign="top" align="center">&#x2212;2.720</td>
<td valign="top" align="center">&#x2212;0.186</td>
<td valign="top" align="center">0.007<xref ref-type="table-fn" rid="t2fn1">&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Saccade frequency (count/s)</td>
<td valign="top" align="right">6.359</td>
<td valign="top" align="right">2.727</td>
<td valign="top" align="right">6.528</td>
<td valign="top" align="right">2.996</td>
<td valign="top" align="center">&#x2212;1.408</td>
<td valign="top" align="center">&#x2212;0.097</td>
<td valign="top" align="center">0.159</td>
</tr>
<tr>
<td valign="top" align="left">Saccade duration average (ms)</td>
<td valign="top" align="right">47.074</td>
<td valign="top" align="right">6.910</td>
<td valign="top" align="right">48.894</td>
<td valign="top" align="right">8.337</td>
<td valign="top" align="center">&#x2212;2.578</td>
<td valign="top" align="center">&#x2212;0.177</td>
<td valign="top" align="center">0.010<xref ref-type="table-fn" rid="t2fn1">&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Saccade duration total (ms)</td>
<td valign="top" align="right">1223.823</td>
<td valign="top" align="right">672.553</td>
<td valign="top" align="right">1136.381</td>
<td valign="top" align="right">627.435</td>
<td valign="top" align="center">&#x2212;1.737</td>
<td valign="top" align="center">&#x2212;0.130</td>
<td valign="top" align="center">0.082</td>
</tr>
<tr>
<td valign="top" align="left">Saccade velocity (&#x00B0;/s)</td>
<td valign="top" align="right">92.546</td>
<td valign="top" align="right">35.543</td>
<td valign="top" align="right">91.922</td>
<td valign="top" align="right">30.948</td>
<td valign="top" align="center">&#x2212;2.578</td>
<td valign="top" align="center">&#x2212;0.178</td>
<td valign="top" align="center">0.656</td>
</tr>
<tr>
<td valign="top" align="left">Saccade amplitude (&#x00B0;)</td>
<td valign="top" align="right">5.036</td>
<td valign="top" align="right">2.255</td>
<td valign="top" align="right">5.243</td>
<td valign="top" align="right">2.323</td>
<td valign="top" align="center">&#x2212;1.569</td>
<td valign="top" align="center">&#x2212;0.108</td>
<td valign="top" align="center">0.117</td>
</tr>
<tr>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="right">12.624</td>
<td valign="top" align="right">7.458</td>
<td valign="top" align="right">11.338</td>
<td valign="top" align="right">7.123</td>
<td valign="top" align="center">&#x2212;2.506</td>
<td valign="top" align="center">&#x2212;0.172</td>
<td valign="top" align="center">0.012<xref ref-type="table-fn" rid="t2fn1">&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Fixation frequency (count/s)</td>
<td valign="top" align="right">2.971</td>
<td valign="top" align="right">1.193</td>
<td valign="top" align="right">2.994</td>
<td valign="top" align="right">1.412</td>
<td valign="top" align="center">&#x2212;1.000</td>
<td valign="top" align="center">&#x2212;0.069</td>
<td valign="top" align="center">0.318</td>
</tr>
<tr>
<td valign="top" align="left">Fixation duration average (ms)</td>
<td valign="top" align="right">149.953</td>
<td valign="top" align="right">49.734</td>
<td valign="top" align="right">142.683</td>
<td valign="top" align="right">50.028</td>
<td valign="top" align="center">&#x2212;2.015</td>
<td valign="top" align="center">&#x2212;0.145</td>
<td valign="top" align="center">0.044<xref ref-type="table-fn" rid="t2fn1">&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Fixation duration total (ms)</td>
<td valign="top" align="right">2006.528</td>
<td valign="top" align="right">1231.404</td>
<td valign="top" align="right">1807.049</td>
<td valign="top" align="right">1143.751</td>
<td valign="top" align="center">&#x2212;2.863</td>
<td valign="top" align="center">&#x2212;0.206</td>
<td valign="top" align="center">0.004<xref ref-type="table-fn" rid="t2fn1">&#x002A;&#x002A;</xref></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="t2fn1"><p><italic>Significance codes: &#x002A;&#x002A;&#x002A;<italic>p</italic> &#x003C; 0.001, &#x002A;&#x002A;<italic>p</italic> &#x003C; 0.01, &#x002A;<italic>p</italic> &#x003C; 0.05.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="S2.SS2.SSS4">
<title>Saccades</title>
<p>There were no significant differences in frequency, total duration, velocity, and amplitude of saccade (<italic>p</italic> &#x003E; 0.05) between deceptive and truthful responses, while there were significant differences in saccade count (<italic>r</italic> = &#x2212;0.186, <italic>p</italic> = 0.007) and average saccade duration (<italic>r</italic> = &#x2212;0.177, <italic>p</italic> = 0.010). Deception caused increased saccade count and decreased average saccade duration (see <xref ref-type="table" rid="T2">Table 2</xref>).</p>
</sec>
<sec id="S2.SS2.SSS5">
<title>Fixations</title>
<p>As presented in <xref ref-type="table" rid="T2">Table 2</xref>, there was no significant difference in fixation frequency (<italic>r</italic> = &#x2212;0.069, <italic>p</italic> &#x003E; 0.05) between deceptive and truthful responses; by contrast, there were significant differences in fixation count (<italic>r</italic> = &#x2212;0.172, <italic>p</italic> = 0.012), average fixation duration (<italic>r</italic> = &#x2212;0.145, <italic>p</italic> = 0.044), and total fixation duration (<italic>r</italic> = &#x2212;0.206, <italic>p</italic> = 0.004). Lying resulted in higher fixation count, total fixation duration, and average fixation duration.</p>
<p>This study also analyzed the fixation behaviors in the AOIs (see <xref ref-type="table" rid="T3">Table 3</xref>). In the NO and NK areas, the fixation count (<italic>r</italic><sub><italic>NO</italic></sub> = &#x2212;0.490, <italic>r</italic><sub><italic>NK</italic></sub> = &#x2212;0.341, <italic>p</italic> &#x003C; 0.001) and the percentage fixation time (<italic>r</italic><sub><italic>NO</italic></sub> = &#x2212;0.521, <italic>r</italic><sub><italic>NK</italic></sub> = &#x2212;0.400, <italic>p</italic> &#x003C; 0.001) of lying participants were lower than those of honest participants. In the PO and PK areas, the fixation count (<italic>r</italic><sub><italic>PO</italic></sub> = 0.522, <italic>r</italic><sub><italic>PK</italic></sub> = 0.326, <italic>p</italic> &#x003C; 0.001) and the percentage fixation time (<italic>r</italic><sub><italic>PO</italic></sub> = 0.516, <italic>r</italic><sub><italic>PK</italic></sub> = 0.366, <italic>p</italic> &#x003C; 0.001) were significantly higher in the lying condition. There were no significant differences in the QT, MO, and EO areas (<italic>p</italic> &#x003E; 0.05).</p>
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>The analysis of fixation behaviors in the AOIs in Study 1.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">AOI</td>
<td valign="top" align="center">Variable</td>
<td valign="top" align="center" colspan="2">Instructed lie<hr/></td>
<td valign="top" align="center" colspan="2">Truth<hr/></td>
<td valign="top" align="center"><italic>Z</italic></td>
<td valign="top" align="center">Effect size <italic>r</italic></td>
<td valign="top" align="center"><italic>P</italic>-value (2-tailed)</td>
</tr>
<tr>
<td valign="top" colspan="2"/>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td/>
<td/>
<td/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">QT</td>
<td valign="top" align="center">Fixation count</td>
<td valign="top" align="center">6.799</td>
<td valign="top" align="center">4.349</td>
<td valign="top" align="center">6.995</td>
<td valign="top" align="center">4.553</td>
<td valign="top" align="center">&#x2212;0.532</td>
<td valign="top" align="center">&#x2212;0.038</td>
<td valign="top" align="center">0.595</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">The percentage fixation time (%)</td>
<td valign="top" align="center">26.356</td>
<td valign="top" align="center">17.416</td>
<td valign="top" align="center">27.477</td>
<td valign="top" align="center">17.007</td>
<td valign="top" align="center">&#x2212;0.895</td>
<td valign="top" align="center">&#x2212;0.064</td>
<td valign="top" align="center">0.371</td>
</tr>
<tr>
<td valign="top" align="left">NO</td>
<td valign="top" align="center">Fixation count</td>
<td valign="top" align="center">0.144</td>
<td valign="top" align="center">0.455</td>
<td valign="top" align="center">0.814</td>
<td valign="top" align="center">1.123</td>
<td valign="top" align="center">&#x2212;6.820</td>
<td valign="top" align="center">&#x2212;0.490</td>
<td valign="top" align="center">&#x003C;0.001<xref ref-type="table-fn" rid="t3fn1">&#x002A;&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="center">The percentage fixation time (%)</td>
<td valign="top" align="center">0.409</td>
<td valign="top" align="center">1.285</td>
<td valign="top" align="center">3.313</td>
<td valign="top" align="center">4.936</td>
<td valign="top" align="center">&#x2212;7.259</td>
<td valign="top" align="center">&#x2212;0.521</td>
<td valign="top" align="center">&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">NK</td>
<td valign="top" align="center">Fixation count</td>
<td valign="top" align="center">0.362</td>
<td valign="top" align="center">0.731</td>
<td valign="top" align="center">0.701</td>
<td valign="top" align="center">0.829</td>
<td valign="top" align="center">&#x2212;4.744</td>
<td valign="top" align="center">&#x2212;0.341</td>
<td valign="top" align="center">&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">The percentage fixation time (%)</td>
<td valign="top" align="center">1.180</td>
<td valign="top" align="center">2.553</td>
<td valign="top" align="center">2.838</td>
<td valign="top" align="center">3.769</td>
<td valign="top" align="center">&#x2212;5.568</td>
<td valign="top" align="center">&#x2212;0.400</td>
<td valign="top" align="center">&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">PO</td>
<td valign="top" align="center">Fixation count</td>
<td valign="top" align="center">1.026</td>
<td valign="top" align="center">1.491</td>
<td valign="top" align="center">0.460</td>
<td valign="top" align="center">0.216</td>
<td valign="top" align="center">&#x2212;7.273</td>
<td valign="top" align="center">&#x2212;0.522</td>
<td valign="top" align="center">&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">The percentage fixation time (%)</td>
<td valign="top" align="center">4.883</td>
<td valign="top" align="center">8.698</td>
<td valign="top" align="center">0.842</td>
<td valign="top" align="center">2.299</td>
<td valign="top" align="center">&#x2212;7.187</td>
<td valign="top" align="center">&#x2212;0.516</td>
<td valign="top" align="center">&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">PK</td>
<td valign="top" align="center">Fixation count</td>
<td valign="top" align="center">0.881</td>
<td valign="top" align="center">1.239</td>
<td valign="top" align="center">0.464</td>
<td valign="top" align="center">0.789</td>
<td valign="top" align="center">&#x2212;4.546</td>
<td valign="top" align="center">&#x2212;0.326</td>
<td valign="top" align="center">&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">The percentage fixation time (%)</td>
<td valign="top" align="center">3.170</td>
<td valign="top" align="center">4.256</td>
<td valign="top" align="center">1.492</td>
<td valign="top" align="center">2.724</td>
<td valign="top" align="center">&#x2212;5.094</td>
<td valign="top" align="center">&#x2212;0.366</td>
<td valign="top" align="center">&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">EO</td>
<td valign="top" align="center">Fixation count</td>
<td valign="top" align="center">1.170</td>
<td valign="top" align="center">1.481</td>
<td valign="top" align="center">1.057</td>
<td valign="top" align="center">1.188</td>
<td valign="top" align="center">&#x2212;0.619</td>
<td valign="top" align="center">&#x2212;0.044</td>
<td valign="top" align="center">0.536</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">The percentage fixation time (%)</td>
<td valign="top" align="center">5.350</td>
<td valign="top" align="center">8.724</td>
<td valign="top" align="center">4.301</td>
<td valign="top" align="center">5.651</td>
<td valign="top" align="center">&#x2212;0.472</td>
<td valign="top" align="center">&#x2212;0.034</td>
<td valign="top" align="center">0.637</td>
</tr>
<tr>
<td valign="top" align="left">MO</td>
<td valign="top" align="center">Fixation count</td>
<td valign="top" align="center">2.959</td>
<td valign="top" align="center">2.639</td>
<td valign="top" align="center">2.696</td>
<td valign="top" align="center">2.556</td>
<td valign="top" align="center">&#x2212;1.326</td>
<td valign="top" align="center">&#x2212;0.095</td>
<td valign="top" align="center">0.185</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">The percentage fixation time (%)</td>
<td valign="top" align="center">11.436</td>
<td valign="top" align="center">11.479</td>
<td valign="top" align="center">11.590</td>
<td valign="top" align="center">11.537</td>
<td valign="top" align="center">&#x2212;0.039</td>
<td valign="top" align="center">&#x2212;0.003</td>
<td valign="top" align="center">0.969</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><italic>AOI, area of interest; QT, the question text; NO, the extreme negative option; NK, the negative keyword; PO, the extreme positive option; PK, the positive keyword; EO, the extreme options; MO, the medium options.</italic></p></fn>
<fn id="t3fn1"><p><italic>Significance codes: &#x002A;&#x002A;&#x002A;<italic>p</italic> &#x003C; 0.001.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="S2.SS3">
<title>Discussion</title>
<p>Under instructed conditions, participants&#x2019; eye behaviors differed between lying and being honest.</p>
<p>The results of Study 1 demonstrated that participants who were instructed to lie showed larger pupil size in the MO area, higher blink count, frequency, and total duration, higher saccade count with shorter average saccade duration, and higher fixation count and duration. The results for pupil size, saccades, and fixations were consistent with the view that cognitive load is higher when lying. A previous study suggested that the higher the cognitive load, the lower the blink rate (<xref ref-type="bibr" rid="B86">Zagermann et al., 2016</xref>), whereas the present study observed increased blink behaviors when lying, which would suggest a lower cognitive load. The blink results did, however, agree with findings that deception results in greater blink count, blink frequency, and blink duration (<xref ref-type="bibr" rid="B81">Webb et al., 2009</xref>; <xref ref-type="bibr" rid="B24">George et al., 2017</xref>).</p>
<p>In the analysis of AOIs, when participants were instructed to lie, the fixation count and the percentage fixation time were lower in the NO and NK areas and higher in the PO and PK areas than when they were honest. In the lying condition, participants focused more on the positive items, whereas in the honest condition they focused more on the negative items. This pattern is consistent with their questionnaire answers and the underlying mental processes. Nevertheless, there were no significant differences in fixation in the QT, MO, and EO areas.</p>
</sec>
</sec>
<sec id="S3">
<title>Study 2: Spontaneous Lie</title>
<p>In Study 1, this research identified eye-tracking indicators that differ between lying and being honest. However, participants in Study 1 were instructed to lie and had no real motivation: they did not need to worry about the accuracy of their answers, the consequences of being discovered by the teacher, and so on. The mental processes of instructed lying may therefore differ from those in an actual situation, where the pressure of evaluating a disliked teacher may lead to more complex mental processes and heavier cognitive load.</p>
<p>Consequently, to explore whether eye behaviors can help detect spontaneous lying as they do instructed lying, we implemented a new design for Study 2. This study simulated a more realistic scenario to test whether the relationship between eye-tracking indicators and deception still holds in an actual situation.</p>
<sec id="S3.SS1">
<title>Materials and Methods</title>
<sec id="S3.SS1.SSS1">
<title>Scenario</title>
<p>In actual situations, people require motivation to lie: they choose to lie when the risks of telling the truth outweigh the benefits (<xref ref-type="bibr" rid="B65">Tourangeau et al., 2000</xref>; <xref ref-type="bibr" rid="B78">Walzenbach, 2019</xref>). To motivate lying, participants were given a scenario in which telling the truth was risky. We continued to use the teacher-evaluation scenario from Study 1. Participants were aware that their eye behaviors were being recorded but, to elicit spontaneous lying, were not aware of the purpose of the study. They were informed that the study was mainly to assist the school in gathering students&#x2019; evaluations of teachers and, incidentally, to conduct an eye-tracking study of questionnaire reading processes. This study thus created a realistic teacher-evaluation scenario in which to observe participants&#x2019; deception in an actual situation. Before the evaluations, we interviewed participants about a teacher they disliked, and asked for their names and student numbers to increase authenticity. After the experiment, we explained the real purpose of the study and confirmed with participants whether they had believed the real-name teacher-evaluation scenario was real.</p>
</sec>
<sec id="S3.SS1.SSS2">
<title>Materials</title>
<p>The content of the questionnaire used in Study 2 is the same as in Study 1. To increase the authenticity of the scenario, we added the school emblem to the questionnaire.</p>
</sec>
<sec id="S3.SS1.SSS3">
<title>Apparatus</title>
<p>The apparatus was the same as in Study 1.</p>
</sec>
<sec id="S3.SS1.SSS4">
<title>Participants</title>
<p>Thirty-five participants (18 males, 17 females) aged 20&#x2013;24 (<italic>M</italic> = 21.77) were recruited from Sichuan University. All participants were healthy, had normal or corrected-to-normal vision, and reported no history of neurological or psychiatric disorders. All of them received a small honorarium for their participation.</p>
</sec>
<sec id="S3.SS1.SSS5">
<title>Procedure</title>
<p>To eliminate the effects of order, the participants were randomly divided into two groups: the truth-lie group and the lie-truth group.</p>
<p>Truth-lie group:</p>
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>To explore the actual mental processes when evaluating the teachers, we <italic>described the purpose of the study as exploring the relationship between eye behaviors and questionnaires</italic>.</p>
 </list-item>
<list-item>
<label>(2)</label>
<p>An interview was conducted to ask each participant to describe a teacher they disliked the most during college life. We emphasized that the interview was anonymous and not recorded.</p>
 </list-item>
<list-item>
<label>(3)</label>
<p>We asked participants to fill out the questionnaire in front of the eye tracker. We <italic>asked participants to fill in the questionnaires according to the interview content and to answer honestly.</italic> The eye tracker recorded all eye behaviors during the questionnaire filling process.</p>
 </list-item>
<list-item>
<label>(4)</label>
<p>After completing the questionnaire, we explained: <italic>In addition to the scientific study, our main purpose was to assist the school to investigate students&#x2019; satisfaction with the teachers. The questionnaire was real-name and would be recorded. Moreover, the teacher they evaluated could see the responses and respondents.</italic> We asked participants for their names and student numbers to increase authenticity. <italic>Participants were asked if they wanted to fill out the questionnaire again and invalidate the first one.</italic> If participants agreed, they would complete the questionnaire a second time. All eye behaviors were recorded using an eye tracker. An irrelevant questionnaire would be interspersed between the two responses to eliminate learning effects.</p>
 </list-item>
<list-item>
<label>(5)</label>
<p>When participants completed the questionnaire or refused to fill out the questionnaire again, we explained that the scenario of evaluating teachers was simulated, and told them the study&#x2019;s actual purpose. The responses of the questionnaire would be completely confidential and anonymous.</p>
 </list-item>
<list-item>
<label>(6)</label>
<p>We confirmed with participants whether they believed the scenario of real-name evaluation of teachers was real. Further, we checked with participants whether the differences in ratings for each question were caused by lying.</p>
 </list-item>
</list>
<p>Lie-truth group:</p>
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>To explore the actual mental processes when evaluating the teachers, <italic>we described the purpose of the study as exploring the relationship between eye-tracking indicators and questionnaire surveys. We emphasized that, in addition to the scientific study, our main purpose was to assist the school in investigating students&#x2019; satisfaction with teachers.</italic> We asked participants for their names and student numbers to increase authenticity.</p>
 </list-item>
<list-item>
<label>(2)</label>
<p>An interview was conducted to ask each participant to describe a teacher they disliked the most during college life. We emphasized that the interview was anonymous and not recorded.</p>
 </list-item>
<list-item>
<label>(3)</label>
<p>We asked participants to fill out the questionnaire in front of the eye tracker. The teachers that participants mentioned in the interview were evaluated in the questionnaire. The eye tracker recorded all eye behaviors during the questionnaire filling process. <italic>We emphasized that the questionnaire was real-name and would be recorded. Moreover, the teacher they evaluated could see the responses and respondents.</italic></p>
 </list-item>
<list-item>
<label>(4)</label>
<p><italic>After completing the questionnaire, we explained to participants that the scenario of evaluating teachers was simulated.</italic> We told participants that this study aimed to explore the relationship between eye-tracking indicators and deception in questionnaire surveys. The responses of the questionnaire would be completely confidential and anonymous.</p>
 </list-item>
<list-item>
<label>(5)</label>
<p><italic>We asked participants to fill out the questionnaire again for the same teachers according to the actual situation.</italic> An irrelevant questionnaire would be interspersed between the two responses to eliminate learning effects.</p>
 </list-item>
<list-item>
<label>(6)</label>
<p>We confirmed with participants whether they believed the scenario of real-name evaluation of teachers was real. Further, we checked with participants whether the differences in ratings for each question were caused by lying.</p>
 </list-item>
</list>
<p>The procedure of the experiment is shown in <xref ref-type="fig" rid="F3">Figure 3</xref>.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>The experiment procedure of Study 2.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-12-774961-g003.tif"/>
</fig>
</sec>
<sec id="S3.SS1.SSS6">
<title>Data Analysis</title>
<p>In Study 2, paired-samples <italic>t</italic>-tests were conducted for ratings and pupil size, and Wilcoxon signed-rank tests were conducted for blinks, saccades, and fixations. In addition to the same data collection and analysis as in Study 1, this study built a lie/truth classifier based on the eye-tracking data obtained in Study 2 to explore the accuracy of eye behaviors in detecting spontaneous lying.</p>
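<p>As an illustrative sketch only (not the authors&#x2019; analysis code), the two tests can be run in Python with SciPy; the per-participant values below are synthetic stand-ins for the paired measurements:</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired measurements: one value per participant per condition
# (35 participants, as in Study 2).
pupil_lie = rng.normal(4.27, 0.40, size=35)
pupil_truth = pupil_lie - rng.normal(0.05, 0.10, size=35)
blinks_lie = rng.poisson(2.3, size=35).astype(float)
blinks_truth = rng.poisson(1.5, size=35).astype(float)

# Paired-samples t-test for approximately normal data (e.g., pupil size).
t_stat, t_p = stats.ttest_rel(pupil_lie, pupil_truth)

# Wilcoxon signed-rank test for non-normal data (e.g., blink counts).
w_stat, w_p = stats.wilcoxon(blinks_lie, blinks_truth)

print(f"t = {t_stat:.3f} (p = {t_p:.3f}); W = {w_stat:.3f} (p = {w_p:.3f})")
```

Both tests compare the two conditions within participants; the Wilcoxon test is chosen for the count-like measures because it does not assume normality.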
<p>We tried a range of classification algorithms, including decision tree, discriminant analysis, support vector machine (SVM), nearest-neighbor, and ensemble classifiers. The classifiers were developed using the Classification Learner in MATLAB. The most promising classification accuracy came from a linear SVM; the performance of all classifiers is shown in <xref ref-type="supplementary-material" rid="TS1">Supplementary Table S1</xref>. An SVM is a supervised machine learning approach that attempts to separate two classes of data points with a hyperplane in a high-dimensional space (<xref ref-type="bibr" rid="B17">Cortes and Vapnik, 1995</xref>; <xref ref-type="bibr" rid="B13">Chen et al., 2020</xref>). SVM is widely used for classification problems in machine learning (<xref ref-type="bibr" rid="B11">Byvatov et al., 2003</xref>; <xref ref-type="bibr" rid="B50">Peltier et al., 2009</xref>), and many studies of lie detection (<xref ref-type="bibr" rid="B47">Mottelson et al., 2018</xref>; <xref ref-type="bibr" rid="B45">Mazza et al., 2020</xref>; <xref ref-type="bibr" rid="B44">Mathur and Matari&#x0107;, 2020</xref>) or eye movements (<xref ref-type="bibr" rid="B31">Huang et al., 2015</xref>; <xref ref-type="bibr" rid="B18">Dalrymple et al., 2019</xref>; <xref ref-type="bibr" rid="B59">Steil et al., 2019</xref>; <xref ref-type="bibr" rid="B33">Kang et al., 2020</xref>) have used SVM for classification.</p>
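<p>A minimal sketch of such a pipeline, using scikit-learn&#x2019;s linear SVM rather than the MATLAB Classification Learner the authors used, with synthetic features standing in for the eye-tracking measures:</p>

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-ins for eye-tracking features (blink, saccade, and
# fixation measures): 184 lying trials and 175 truth trials, as in Study 2.
X = np.vstack([rng.normal(0.3, 1.0, size=(184, 6)),
               rng.normal(-0.3, 1.0, size=(175, 6))])
y = np.array([1] * 184 + [0] * 175)  # 1 = lie, 0 = truth

# Linear SVM with feature standardization, evaluated by stratified
# five-fold cross-validation (each fold used exactly once for validation).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(scores.mean())
```

Standardizing features before fitting matters for SVMs, since the margin is sensitive to feature scale; the stratified folds keep the lie/truth ratio roughly constant across folds.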
<p>We used cross-validation, in which the training and test sets are rotated over the entire data set. We conducted five-fold cross-validation: one fold was used to validate the model trained on the remaining folds, and this process was repeated five times so that each fold was used exactly once for validation. After the preprocessing procedure for data quality control, 184 lying trials and 175 truth trials remained. The numbers of lying trials in the five folds were 37, 37, 37, 37, and 36, respectively, and each fold contained 35 truth trials. To measure how well the predictors work, this study adopted the following evaluation metrics: accuracy, precision, recall, F<sub>1</sub>-score, and area under the receiver operating characteristic curve (AUC ROC). AUC provides an aggregate measure of performance across all possible classification thresholds. Accuracy, precision, and recall relate to true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) as:</p>
<disp-formula id="S3.E1">
<mml:math id="M1">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo rspace="12.5pt">,</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mrow>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:math>
</disp-formula>
<disp-formula id="S3.E2">
<mml:math id="M2">
<mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>The F<sub>1</sub>-score can be interpreted as a weighted average of the precision and recall, and is defined as:</p>
<disp-formula id="S3.E3">
<mml:math id="M3">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>F</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x00D7;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo>&#x00D7;</mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>i</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>l</mml:mi>
<mml:mi>l</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
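To make the definitions above concrete, the following short sketch (ours, not the authors' code) computes all four metrics from confusion counts; the example values are the lie-class counts reported in Table 8 (TP = 130, FP = 39, FN = 54, TN = 136), which reproduce the accuracy, precision, recall, and F1 reported there.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Lie-class counts from Table 8: accuracy is about 0.7409, precision
# about 0.77, recall about 0.71, and F1 about 0.74.
metrics = classification_metrics(tp=130, tn=136, fp=39, fn=54)
```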
</sec>
</sec>
<sec id="S3.SS2">
<title>Results</title>
<sec id="S3.SS2.SSS1">
<title>Ratings</title>
<p>Most participants (94.3%, 33 of 35) believed the scenario of real-name evaluation of teachers was real and chose to lie. Participants gave higher ratings in the real-name condition (<italic>M</italic> = 3.937, <italic>SD</italic> = 0.820) than in the honest condition (<italic>M</italic> = 2.790, <italic>SD</italic> = 1.042). A paired-samples <italic>t</italic>-test showed a significant difference between the ratings in the real-name condition and the honest condition (<italic>t</italic> = 19.934, Cohen&#x2019;s <italic>d</italic> = 1.070, <italic>p</italic> &#x003C; 0.001). Of all questions, 68.86% (241 of 350) were rated higher in the real-name condition, and participants confirmed that these responses contained lies. In the spontaneous lie condition, most participants lied to a degree ranging from slight to complete.</p>
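As a minimal sketch (not the authors' code), the paired-samples t statistic and Cohen's d can be computed from the per-question rating differences. Here d is taken as the mean paired difference divided by its standard deviation, one common convention for paired designs; other conventions give slightly different values.

```python
import numpy as np

def paired_t_and_cohens_d(cond_a, cond_b):
    """Paired-samples t statistic and Cohen's d for two rating conditions.

    Cohen's d is computed as mean(diff) / sd(diff), where diff are the
    paired differences between the two conditions.
    """
    diff = np.asarray(cond_a, dtype=float) - np.asarray(cond_b, dtype=float)
    d = diff.mean() / diff.std(ddof=1)   # standardized mean difference
    t = d * np.sqrt(diff.size)           # t = mean / (sd / sqrt(n))
    return t, d
```

scipy.stats.ttest_rel would additionally return the two-tailed p-value.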
</sec>
<sec id="S3.SS2.SSS2">
<title>Pupil Size</title>
<p>There was a significant difference in pupil size between deceptive and truthful responses (Cohen&#x2019;s <italic>d</italic> = 0.858, <italic>p</italic> &#x003C; 0.001). Lies resulted in larger pupil diameter, as shown in <xref ref-type="table" rid="T4">Table 4</xref>.</p>
<table-wrap position="float" id="T4">
<label>TABLE 4</label>
<caption><p>The analysis of pupil size in Study 2.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Variable</td>
<td valign="top" align="center" colspan="2">Spontaneous lie<hr/></td>
<td valign="top" align="center" colspan="2">Truth<hr/></td>
<td valign="top" align="center">t</td>
<td valign="top" align="center">Effect size Cohen&#x2019;s <italic>d</italic></td>
<td valign="top" align="center"><italic>P</italic>-value (2-tailed)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td/>
<td/>
<td/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Pupil diameter (mm)</td>
<td valign="top" align="center">4.431</td>
<td valign="top" align="center">0.506</td>
<td valign="top" align="center">4.347</td>
<td valign="top" align="center">0.536</td>
<td valign="top" align="center">3.361</td>
<td valign="top" align="center">0.858</td>
<td valign="top" align="center">&#x003C;0.001<xref ref-type="table-fn" rid="t4fn1">&#x002A;&#x002A;&#x002A;</xref></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="t4fn1"><p><italic>Significance codes: &#x002A;&#x002A;&#x002A;<italic>p</italic> &#x003C; 0.001.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="S3.SS2.SSS3">
<title>Blinks</title>
<p>In Study 2, as shown in <xref ref-type="table" rid="T5">Table 5</xref>, there were no significant differences in blink count, blink frequency, average blink duration, or total blink duration between deceptive and truthful responses (<italic>p</italic> &#x003E; 0.05).</p>
<table-wrap position="float" id="T5">
<label>TABLE 5</label>
<caption><p>The analysis of blink behaviors, saccade behaviors and fixation behaviors in Study 2.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Variable</td>
<td valign="top" align="center" colspan="2">Spontaneous lie<hr/></td>
<td valign="top" align="center" colspan="2">Truth<hr/></td>
<td valign="top" align="center"><italic>Z</italic></td>
<td valign="top" align="center">Effect size <italic>r</italic></td>
<td valign="top" align="center"><italic>P</italic>-value (2-tailed)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td/>
<td/>
<td/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Blink count</td>
<td valign="top" align="right">1.833</td>
<td valign="top" align="right">3.698</td>
<td valign="top" align="right">2.333</td>
<td valign="top" align="right">3.848</td>
<td valign="top" align="center">&#x2212;1.463</td>
<td valign="top" align="center">&#x2212;0.106</td>
<td valign="top" align="center">0.143</td>
</tr>
<tr>
<td valign="top" align="left">Blink frequency (count/s)</td>
<td valign="top" align="right">0.453</td>
<td valign="top" align="right">0.865</td>
<td valign="top" align="right">0.632</td>
<td valign="top" align="right">1.214</td>
<td valign="top" align="center">&#x2212;1.553</td>
<td valign="top" align="center">&#x2212;0.112</td>
<td valign="top" align="center">0.120</td>
</tr>
<tr>
<td valign="top" align="left">Blink duration average (ms)</td>
<td valign="top" align="right">152.353</td>
<td valign="top" align="right">221.158</td>
<td valign="top" align="right">105.125</td>
<td valign="top" align="right">65.682</td>
<td valign="top" align="center">&#x2212;1.933</td>
<td valign="top" align="center">&#x2212;0.140</td>
<td valign="top" align="center">0.053</td>
</tr>
<tr>
<td valign="top" align="left">Blink duration total (ms)</td>
<td valign="top" align="right">474.554</td>
<td valign="top" align="right">1309.936</td>
<td valign="top" align="right">470.167</td>
<td valign="top" align="right">762.827</td>
<td valign="top" align="center">&#x2212;1.523</td>
<td valign="top" align="center">&#x2212;0.110</td>
<td valign="top" align="center">0.128</td>
</tr>
<tr>
<td valign="top" align="left">Saccade count</td>
<td valign="top" align="right">21.094</td>
<td valign="top" align="right">12.658</td>
<td valign="top" align="right">26.740</td>
<td valign="top" align="right">16.780</td>
<td valign="top" align="center">&#x2212;4.151</td>
<td valign="top" align="center">&#x2212;0.300</td>
<td valign="top" align="center">&#x2004;&#x003C;0.001<xref ref-type="table-fn" rid="t5fn1">&#x002A;&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Saccade frequency (count/s)</td>
<td valign="top" align="right">6.095</td>
<td valign="top" align="right">2.820</td>
<td valign="top" align="right">6.646</td>
<td valign="top" align="right">2.842</td>
<td valign="top" align="center">&#x2212;2.716</td>
<td valign="top" align="center">&#x2212;0.196</td>
<td valign="top" align="center">0.007<xref ref-type="table-fn" rid="t5fn1">&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Saccade duration average (ms)</td>
<td valign="top" align="right">46.296</td>
<td valign="top" align="right">12.501</td>
<td valign="top" align="right">47.260</td>
<td valign="top" align="right">9.941</td>
<td valign="top" align="center">&#x2212;0.398</td>
<td valign="top" align="center">&#x2212;0.029</td>
<td valign="top" align="center">0.690</td>
</tr>
<tr>
<td valign="top" align="left">Saccade duration total (ms)</td>
<td valign="top" align="right">994.484</td>
<td valign="top" align="right">589.597</td>
<td valign="top" align="right">1280.223</td>
<td valign="top" align="right">844.124</td>
<td valign="top" align="center">&#x2212;4.133</td>
<td valign="top" align="center">&#x2212;0.298</td>
<td valign="top" align="center">&#x2004;&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">Saccade velocity (&#x00B0;/s)</td>
<td valign="top" align="right">91.999</td>
<td valign="top" align="right">23.320</td>
<td valign="top" align="right">91.260</td>
<td valign="top" align="right">28.526</td>
<td valign="top" align="center">&#x2212;0.727</td>
<td valign="top" align="center">&#x2212;0.054</td>
<td valign="top" align="center">0.467</td>
</tr>
<tr>
<td valign="top" align="left">Saccade amplitude (&#x00B0;)</td>
<td valign="top" align="right">5.048</td>
<td valign="top" align="right">1.612</td>
<td valign="top" align="right">5.063</td>
<td valign="top" align="right">2.176</td>
<td valign="top" align="center">&#x2212;0.592</td>
<td valign="top" align="center">&#x2212;0.044</td>
<td valign="top" align="center">0.554</td>
</tr>
<tr>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="right">11.040</td>
<td valign="top" align="right">5.466</td>
<td valign="top" align="right">11.347</td>
<td valign="top" align="right">6.629</td>
<td valign="top" align="center">&#x2212;0.037</td>
<td valign="top" align="center">&#x2212;0.003</td>
<td valign="top" align="center">0.971</td>
</tr>
<tr>
<td valign="top" align="left">Fixation frequency (count/s)</td>
<td valign="top" align="right">3.273</td>
<td valign="top" align="right">1.132</td>
<td valign="top" align="right">3.038</td>
<td valign="top" align="right">1.268</td>
<td valign="top" align="center">&#x2212;2.265</td>
<td valign="top" align="center">&#x2212;0.171</td>
<td valign="top" align="center">0.024<xref ref-type="table-fn" rid="t5fn1">&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">Fixation duration average (ms)</td>
<td valign="top" align="right">155.482</td>
<td valign="top" align="right">48.075</td>
<td valign="top" align="right">136.936</td>
<td valign="top" align="right">41.516</td>
<td valign="top" align="center">&#x2212;4.531</td>
<td valign="top" align="center">&#x2212;0.342</td>
<td valign="top" align="center">&#x2004;&#x003C;0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">Fixation duration total (ms)</td>
<td valign="top" align="right">1708.461</td>
<td valign="top" align="right">934.102</td>
<td valign="top" align="right">1604.671</td>
<td valign="top" align="right">1104.755</td>
<td valign="top" align="center">&#x2212;1.902</td>
<td valign="top" align="center">&#x2212;0.143</td>
<td valign="top" align="center">0.150</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="t5fn1"><p><italic>Significance codes: &#x002A;&#x002A;&#x002A;<italic>p</italic> &#x003C; 0.001, &#x002A;&#x002A;<italic>p</italic> &#x003C; 0.01, &#x002A;<italic>p</italic> &#x003C; 0.05.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="S3.SS2.SSS4">
<title>Saccades</title>
<p>There were no significant differences in average saccade duration, saccade velocity, or saccade amplitude (<italic>p</italic> &#x003E; 0.05). The differences in saccade count (<italic>r</italic> = &#x2212;0.300, <italic>p</italic> &#x003C; 0.001), saccade frequency (<italic>r</italic> = &#x2212;0.196, <italic>p</italic> = 0.007), and total saccade duration (<italic>r</italic> = &#x2212;0.298, <italic>p</italic> &#x003C; 0.001) between deceptive and truthful responses were statistically significant: when lying, the saccade count, saccade frequency, and total saccade duration were all significantly lower (see <xref ref-type="table" rid="T5">Table 5</xref>).</p>
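The Z statistics and effect sizes r reported in Table 5 are characteristic of a nonparametric test such as the Wilcoxon signed-rank test with r = Z/sqrt(N). The article does not publish its analysis code, so the following from-scratch sketch of that convention (normal approximation, no tie correction in the variance) is an assumption on our part.

```python
import math

def wilcoxon_z_and_r(diffs):
    """Wilcoxon signed-rank Z (normal approximation, no tie correction
    in the variance) and effect size r = Z / sqrt(N), where N counts
    the nonzero paired differences."""
    d = [x for x in diffs if x != 0]       # zero differences are dropped
    n = len(d)
    # average ranks of the absolute differences (ties share a rank)
    idx = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[idx[j + 1]]) == abs(d[idx[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[idx[k]] = (i + j) / 2 + 1   # 1-based average rank
        i = j + 1
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    return z, z / math.sqrt(n)
```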
</sec>
<sec id="S3.SS2.SSS5">
<title>Fixations</title>
<p>As can be seen from <xref ref-type="table" rid="T5">Table 5</xref>, there were no significant differences in fixation count or total fixation duration between deceptive and truthful responses (<italic>p</italic> &#x003E; 0.05), while there were significant differences in fixation frequency (<italic>r</italic> = &#x2212;0.171, <italic>p</italic> = 0.024) and average fixation duration (<italic>r</italic> = &#x2212;0.342, <italic>p</italic> &#x003C; 0.001). Deception increased fixation frequency and average fixation duration.</p>
<p>The fixation behaviors in the AOIs were analyzed (see <xref ref-type="table" rid="T6">Table 6</xref>). In the QT area, there was no significant difference in fixation count between deceptive and truthful responses (<italic>p</italic> &#x003E; 0.05); however, the percentage fixation time (<italic>r</italic> = &#x2212;0.233, <italic>p</italic> = 0.002) was significantly higher when lying. In the NO area, the fixation count (<italic>r</italic> = &#x2212;0.340, <italic>p</italic> &#x003C; 0.001) and the percentage fixation time (<italic>r</italic> = &#x2212;0.354, <italic>p</italic> &#x003C; 0.001) were significantly lower when lying. The fixation count was significantly lower in the NK area when lying (<italic>r</italic> = &#x2212;0.152, <italic>p</italic> = 0.043), while the percentage fixation time showed no significant difference (<italic>p</italic> &#x003E; 0.05). In the PO and PK areas, the fixation count (<italic>r</italic><sub><italic>PO</italic></sub> = &#x2212;0.428, <italic>r</italic><sub><italic>PK</italic></sub> = &#x2212;0.323, <italic>p</italic> &#x003C; 0.001) and the percentage fixation time (<italic>r</italic><sub><italic>PO</italic></sub> = &#x2212;0.487, <italic>r</italic><sub><italic>PK</italic></sub> = &#x2212;0.458, <italic>p</italic> &#x003C; 0.001) were significantly higher in the lying condition than in the honest condition. In the EO area, lies caused increased fixation count (<italic>r</italic> = &#x2212;0.195, <italic>p</italic> = 0.010) and percentage fixation time (<italic>r</italic> = &#x2212;0.199, <italic>p</italic> = 0.008). Meanwhile, in the MO area, lies resulted in a lower fixation count (<italic>r</italic> = &#x2212;0.149, <italic>p</italic> = 0.048).</p>
<table-wrap position="float" id="T6">
<label>TABLE 6</label>
<caption><p>The analysis of fixation behaviors in the AOIs in Study 2.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">AOI</td>
<td valign="top" align="left">Variable</td>
<td valign="top" align="center" colspan="2">Spontaneous lie<hr/></td>
<td valign="top" align="center" colspan="2">Truth<hr/></td>
<td valign="top" align="center"><italic>Z</italic></td>
<td valign="top" align="center">Effect size <italic>r</italic></td>
<td valign="top" align="center"><italic>P</italic>-value (2-tailed)</td>
</tr>
<tr>
<td valign="top" colspan="2"/>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td valign="top" align="center"><italic>M</italic></td>
<td valign="top" align="center"><italic>SD</italic></td>
<td/>
<td/>
<td valign="top" align="center"/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">QT</td>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="center">5.648</td>
<td valign="top" align="center">3.592</td>
<td valign="top" align="center">6.097</td>
<td valign="top" align="center">4.563</td>
<td valign="top" align="center">&#x2212;0.663</td>
<td valign="top" align="center">&#x2212;0.050</td>
<td valign="top" align="center">0.508</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">The percentage fixation time (%)</td>
<td valign="top" align="center">25.501</td>
<td valign="top" align="center">15.641</td>
<td valign="top" align="center">21.488</td>
<td valign="top" align="center">15.736</td>
<td valign="top" align="center">&#x2212;3.085</td>
<td valign="top" align="center">&#x2212;0.233</td>
<td valign="top" align="center">0.002<xref ref-type="table-fn" rid="t6fn1">&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td valign="top" align="left">NO</td>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="center">0.080</td>
<td valign="top" align="center">0.292</td>
<td valign="top" align="center">0.455</td>
<td valign="top" align="center">1.089</td>
<td valign="top" align="center">&#x2212;4.509</td>
<td valign="top" align="center">&#x2212;0.340</td>
<td valign="top" align="center">&#x003C; 0.001<xref ref-type="table-fn" rid="t6fn1">&#x002A;&#x002A;&#x002A;</xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="left">The percentage fixation time (%)</td>
<td valign="top" align="center">0.246</td>
<td valign="top" align="center">1.0211</td>
<td valign="top" align="center">1.949</td>
<td valign="top" align="center">5.198</td>
<td valign="top" align="center">&#x2212;4.695</td>
<td valign="top" align="center">&#x2212;0.354</td>
<td valign="top" align="center">&#x003C; 0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">NK</td>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="center">0.364</td>
<td valign="top" align="center">0.774</td>
<td valign="top" align="center">0.523</td>
<td valign="top" align="center">0.913</td>
<td valign="top" align="center">&#x2212;2.019</td>
<td valign="top" align="center">&#x2212;0.152</td>
<td valign="top" align="center">0.043<xref ref-type="table-fn" rid="t6fn1">&#x002A;</xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="left">The percentage fixation time (%)</td>
<td valign="top" align="center">1.328</td>
<td valign="top" align="center">2.666</td>
<td valign="top" align="center">1.810</td>
<td valign="top" align="center">3.485</td>
<td valign="top" align="center">&#x2212;1.195</td>
<td valign="top" align="center">&#x2212;0.090</td>
<td valign="top" align="center">0.232</td>
</tr>
<tr>
<td valign="top" align="left">PO</td>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="center">0.858</td>
<td valign="top" align="center">1.194</td>
<td valign="top" align="center">0.267</td>
<td valign="top" align="center">0.661</td>
<td valign="top" align="center">&#x2212;5.684</td>
<td valign="top" align="center">&#x2212;0.428</td>
<td valign="top" align="center">&#x003C; 0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">The percentage fixation time (%)</td>
<td valign="top" align="center">4.380</td>
<td valign="top" align="center">6.875</td>
<td valign="top" align="center">0.895</td>
<td valign="top" align="center">2.580</td>
<td valign="top" align="center">&#x2212;6.466</td>
<td valign="top" align="center">&#x2212;0.487</td>
<td valign="top" align="center">&#x003C; 0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">PK</td>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="center">0.881</td>
<td valign="top" align="center">0.958</td>
<td valign="top" align="center">0.494</td>
<td valign="top" align="center">1.025</td>
<td valign="top" align="center">&#x2212;4.282</td>
<td valign="top" align="center">&#x2212;0.323</td>
<td valign="top" align="center">&#x003C; 0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">The percentage fixation time (%)</td>
<td valign="top" align="center">4.194</td>
<td valign="top" align="center">5.330</td>
<td valign="top" align="center">1.525</td>
<td valign="top" align="center">3.168</td>
<td valign="top" align="center">&#x2212;6.074</td>
<td valign="top" align="center">&#x2212;0.458</td>
<td valign="top" align="center">&#x003C; 0.001&#x002A;&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">EO</td>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="center">0.972</td>
<td valign="top" align="center">1.239</td>
<td valign="top" align="center">0.699</td>
<td valign="top" align="center">1.217</td>
<td valign="top" align="center">&#x2212;2.588</td>
<td valign="top" align="center">&#x2212;0.195</td>
<td valign="top" align="center">0.010&#x002A;</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">The percentage fixation time (%)</td>
<td valign="top" align="center">4.676</td>
<td valign="top" align="center">7.061</td>
<td valign="top" align="center">3.103</td>
<td valign="top" align="center">6.188</td>
<td valign="top" align="center">&#x2212;2.636</td>
<td valign="top" align="center">&#x2212;0.199</td>
<td valign="top" align="center">0.008&#x002A;&#x002A;</td>
</tr>
<tr>
<td valign="top" align="left">MO</td>
<td valign="top" align="left">Fixation count</td>
<td valign="top" align="center">2.954</td>
<td valign="top" align="center">2.866</td>
<td valign="top" align="center">3.500</td>
<td valign="top" align="center">3.046</td>
<td valign="top" align="center">&#x2212;1.975</td>
<td valign="top" align="center">&#x2212;0.149</td>
<td valign="top" align="center">0.048&#x002A;</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">The percentage fixation time (%)</td>
<td valign="top" align="center">14.587</td>
<td valign="top" align="center">12.393</td>
<td valign="top" align="center">13.085</td>
<td valign="top" align="center">11.482</td>
<td valign="top" align="center">&#x2212;0.170</td>
<td valign="top" align="center">&#x2212;0.013</td>
<td valign="top" align="center">0.242</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><italic>AOI, area of interest; QT, the question text; NO, the extreme negative option; NK, the negative keyword; PO, the extreme positive option; PK, the positive keyword; EO, the extreme options; MO, the medium options.</italic></p></fn>
<fn id="t6fn1"><p><italic>Significance codes: &#x002A;&#x002A;&#x002A;<italic>p</italic> &#x003C; 0.001, &#x002A;&#x002A;<italic>p</italic> &#x003C; 0.01, &#x002A;<italic>p</italic> &#x003C; 0.05.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
<p>The heat maps of lying and truthful responses in Study 1 and Study 2 are shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>The heat map of lying and honesty in Study 1 and Study 2.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-12-774961-g004.tif"/>
</fig>
</sec>
<sec id="S3.SS2.SSS6">
<title>Classification</title>
<p>We built a linear SVM; the kernel scale was 1, the box constraint level was 2, and the multiclass method was one-vs-one.</p>
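"Kernel scale" and "box constraint level" are terms from MATLAB's Classification Learner, suggesting the model was fit there. As a hedged sketch, the closest scikit-learn analogue (our mapping, not the authors' code) is a linear SVC with C = 2, evaluated with randomized stratified five-fold cross-validation; the one-vs-one multiclass setting is moot for a two-class (lie/truth) problem.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def five_fold_svm_accuracy(X, y, seed=0):
    """Mean accuracy of a linear SVM over randomized five-fold CV.

    C=2 mirrors the reported box constraint level; standardizing the
    features stands in for kernel scaling.
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=2))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    return cross_val_score(clf, X, y, cv=cv).mean()
```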
<p>Features were chosen based on a previous study on classifying deception in personality tests (<xref ref-type="bibr" rid="B71">van Hooft and Born, 2012</xref>), such as fixation behaviors in AOIs. In Study 1 and Study 2, fixation behaviors and pupil size showed significant differences and consistent tendencies, so this study also included them as features. The feature groups pupil size, fixation behaviors, and fixation behaviors in AOIs provided the most viable features for classifying truths and lies and were thus used in our final classifier (see <xref ref-type="table" rid="T7">Table 7</xref>).</p>
<table-wrap position="float" id="T7">
<label>TABLE 7</label>
<caption><p>Feature groups and specific features for each group.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Feature group</td>
<td valign="top" align="left">Features</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Pupil size</td>
<td valign="top" align="left">Pupil diameter</td>
</tr>
<tr>
<td valign="top" align="left">Fixation behaviors</td>
<td valign="top" align="left">Fixation count, fixation frequency, fixation duration total, fixation duration average</td>
</tr>
<tr>
<td valign="top" align="left">Fixation behaviors in AOIs</td>
<td valign="top" align="left">Fixation count, fixation time percentage in the QT, NO, NK, PO, PK, EO, and MO areas</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><italic>AOIs, areas of interest; QT, the question text; NO, the extreme negative option; NK, the negative keyword; PO, the extreme positive option; PK, the positive keyword; EO, the extreme options; MO, the medium options.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
<p>As reported in <xref ref-type="table" rid="T8">Table 8</xref>, using randomized five-fold cross-validation, this study obtained an accuracy of 74.09%. Precision was 0.77 for lies and 0.72 for truths; recall was 0.71 for lies and 0.78 for truths. These yielded F<sub>1</sub>-scores of 0.74 for lies and 0.75 for truths, and the ROC AUC was 0.78 for both classes. All scores are means across the cross-validation folds.</p>
<table-wrap position="float" id="T8">
<label>TABLE 8</label>
<caption><p>Confusion matrix and performance metrics of classification results.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Accuracy (%)</td>
<td valign="top" align="center">74.09%</td>
<td valign="top" align="center" colspan="2">True label<hr/></td>
<td valign="top" align="center">Precision</td>
<td valign="top" align="center">F<sub>1</sub>-score</td>
<td valign="top" align="center">ROC AUC</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">Lie</td>
<td valign="top" align="center">Truth</td>
<td/>
<td/>
<td/>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Predicted label</td>
<td valign="top" align="center">Lie</td>
<td valign="top" align="center">130</td>
<td valign="top" align="center">39</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">Truth</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">136</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td valign="top" align="left">Recall</td>
<td/>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.78</td>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><italic>ROC AUC, area under the curve of receiver operating characteristics.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
</sec>
<sec id="S3.SS3">
<title>Discussion</title>
<p>In Study 2, when lying, the participants had larger pupil size and greater fixation count and duration, which supports the theory that cognitive load is higher in the dishonest condition. However, when lying, the participants showed a lower saccade count, saccade frequency, and total saccade duration, contradicting the theory that deception increases cognitive load. There were no significant differences in blink behaviors in Study 2. The results for saccade and blink behaviors were inconsistent with Study 1.</p>
<p>According to the AOI-based analysis, when lying, participants paid less attention to the NO and NK areas and more to the PO and PK areas. In general, this is consistent with the mental process of wanting to rate higher under the condition of real-name evaluation. Moreover, in the QT area, the percentage fixation time was higher when lying, indicating that cognitive load was higher during deception. Furthermore, participants focused more on the EO area when lying and more on the MO area when honest, which is in line with the study by <xref ref-type="bibr" rid="B71">van Hooft and Born (2012)</xref>.</p>
<p>The linear SVM classifier performed well above both chance level and human performance. The accuracy achieved in this study suggests that eye-tracking data could be a promising path toward deception detection.</p>
</sec>
</sec>
<sec id="S4">
<title>General Discussion</title>
<p>In two studies, we investigated whether deception in questionnaire surveys can be assessed with eye-tracking. To explore whether the differences in eye behaviors between lying and honesty were consistent and generalizable to real situations, this research compared the results of Study 1 (instructed lying) and Study 2 (spontaneous lying).</p>
<p>The significance and direction of the pupil size and fixation results were consistent across the two studies. When lying, the participants showed increased pupil size, fixation count, and fixation duration. The pupil and fixation data in this study support the view that deception is more cognitively demanding. The elevated mental effort of lying can elicit a higher-than-normal level of nervousness and anxiety (<xref ref-type="bibr" rid="B26">Gonzales, 2018</xref>), and elevated mental effort leads to pupil dilation and more fixation behaviors. Consistent with previous studies, the higher cognitive load caused by deception resulted in more fixation behaviors (<xref ref-type="bibr" rid="B14">Chen et al., 2011</xref>; <xref ref-type="bibr" rid="B86">Zagermann et al., 2016</xref>) and larger pupil size (<xref ref-type="bibr" rid="B79">Wang et al., 2010</xref>; <xref ref-type="bibr" rid="B15">Cook et al., 2012</xref>; <xref ref-type="bibr" rid="B55">Proudfoot et al., 2015</xref>, <xref ref-type="bibr" rid="B54">2016</xref>). Meanwhile, the larger pupil size when lying may also be caused by memory retrieval (<xref ref-type="bibr" rid="B49">Otero et al., 2011</xref>; <xref ref-type="bibr" rid="B37">Kucewicz et al., 2018</xref>) or increased arousal (<xref ref-type="bibr" rid="B8">Bradshaw, 1967</xref>), among other factors.</p>
<p>The AOI analysis showed, in both Study 1 and Study 2, that the participants focused more on positive items when lying and more on negative items when being honest. The analysis of the ratings revealed that participants gave higher ratings when lying; consequently, it is logical that they focused more on positive items. Surprisingly, Study 2 also showed significant differences in the QT, EO, and MO areas, whereas Study 1 did not. Deception is a cognitively demanding task, which is reflected in the increased percentage fixation time in the QT area when lying in Study 2: the participants needed more effort to read the question text. Meanwhile, the participants paid more attention to the EO area and less attention to the MO area when lying. This may be because, in Study 2, when participants spontaneously lied, the motivation to lie was stronger because of morality and fear of negative consequences if the teacher found out. In this condition, the participants may have made decisions directly based on social desirability without memory recall, resulting in a greater focus on the extreme options when lying, consistent with a previous study (<xref ref-type="bibr" rid="B71">van Hooft and Born, 2012</xref>).</p>
<p>Blink behaviors showed significance only in Study 1, where the count, frequency, and total duration of blinks were significantly greater when lying. Previous studies have shown that blink count correlates with deceit (<xref ref-type="bibr" rid="B24">George et al., 2017</xref>); <xref ref-type="bibr" rid="B7">Borza et al. (2018)</xref> found that dishonesty increased blink count and duration. However, an increased blink count can also indicate decreased cognitive load, arousal level, and attentional load (<xref ref-type="bibr" rid="B86">Zagermann et al., 2016</xref>; <xref ref-type="bibr" rid="B43">Maffei and Angrilli, 2018</xref>). The higher cognitive load in Study 2 may have reduced blink behaviors, and the combined effect of these factors may explain the non-significance of blink behaviors in Study 2. Blink behaviors under deception are complex and still need further research.</p>
<p>Saccade behaviors differed significantly in both Study 1 and Study 2, but in opposite directions: when lying, saccade count, saccade velocity, and saccade amplitude were higher in Study 1 and lower in Study 2. <xref ref-type="bibr" rid="B77">Vrij et al. (2015)</xref> found that saccade velocity was higher when lying, but in this research the results of Study 1 and Study 2 differed. This discrepancy may indicate that saccades are not a reliable predictor of deception in questionnaire surveys, as also noted in a previous study by <xref ref-type="bibr" rid="B7">Borza et al. (2018)</xref>.</p>
<p>Based on the comparative analysis of Study 1 and Study 2, pupil size and fixation behaviors appear to be the more reliable and valid indicators for lie detection in the actual situation: they are not restricted to explicitly instructed lying but can also be observed during spontaneous lying. However, blink and saccade behaviors performed differently, or even contradictorily, across Study 1 and Study 2 and still need further research. Results from the two studies showed that the mental processes of instructed lying are likely not the same as those of actual spontaneous lying; these differences may be due to the more complex mental processes of deception in the actual situation. The findings expand our understanding of detecting spontaneous lies, and this study has identified eye movement predictors that remain valid in spontaneous lying.</p>
<p>The current study built a linear SVM classifier on the eye-tracking data of Study 2, achieving an F<sub>1</sub>-score of 0.74 for lie detection and an F<sub>1</sub>-score of 0.75 for truth detection; eye behaviors helped detect lies with an accuracy of 74.09%. These results show that eye behaviors are good predictors of deception in questionnaire research. Although <xref ref-type="bibr" rid="B71">van Hooft and Born (2012)</xref> achieved 82.9% accuracy for lie detection in a personality test questionnaire, the participants in that test were instructed to lie. In the actual situation, the mental processes of spontaneous deception are more complex, and the eye behavior indicators can be influenced by other emotions, such as arousal, fear, and guilt, which may lower accuracy. Despite this, the classifier in this study outperformed both chance level and human performance by a wide margin. In addition to the previously verified lie detection in personality test questionnaires, eye behaviors are also valid for detecting lies in questionnaire surveys that evaluate others, showing the potential of eye-tracking technology for detecting lies in questionnaire surveys.</p>
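<p>The reported per-class F<sub>1</sub>-scores combine precision and recall computed from the classifier&#x2019;s confusion matrix. As a minimal sketch of this arithmetic (using hypothetical confusion counts chosen only for illustration, not the study&#x2019;s actual data), the per-class metrics can be derived as follows:</p>

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical 2x2 confusion matrix (rows: true class, cols: predicted class),
# chosen only to land near the reported magnitudes; NOT the study's actual counts.
#               pred. lie   pred. truth
# true lie          80           26
# true truth        30           84
lie_p, lie_r, lie_f1 = precision_recall_f1(tp=80, fp=30, fn=26)
truth_p, truth_r, truth_f1 = precision_recall_f1(tp=84, fp=26, fn=30)
accuracy = (80 + 84) / (80 + 26 + 30 + 84)

print(f"lie F1 = {lie_f1:.2f}, truth F1 = {truth_f1:.2f}, accuracy = {accuracy:.2%}")
```

<p>With these illustrative counts, the lie-class F<sub>1</sub> rounds to 0.74 and the truth-class F<sub>1</sub> to 0.75, showing how per-class F<sub>1</sub>-scores of similar size can coexist with an overall accuracy in the mid-70% range.</p>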
<p>The present study contributes to understanding the relationship between deception and eye behaviors and provides a basis for detecting lies in questionnaire surveys. Eye-tracking can help improve questionnaire quality. Before publishing a questionnaire, publishers can test it with a small sample by asking respondents to complete it on an eye-tracker. From the eye-tracking data, high-sensitivity questions can be identified, and the reliability of the answers to each question can be evaluated. There are ways to reduce incorrect results due to deliberate misreporting, such as changing the question wording and framing or increasing the respondent&#x2019;s privacy (<xref ref-type="bibr" rid="B66">Tourangeau and Yan, 2007</xref>; <xref ref-type="bibr" rid="B36">Krumpal, 2013</xref>). Using these methods, questionnaire publishers can modify high-sensitivity questions or the survey mode, preventing respondents from lying because of social desirability factors. When referring to questionnaire survey results, eye-tracking can also be used to evaluate the reliability of the questionnaire and avoid relying on unreliable results.</p>
<p>The current study also has limitations that motivate future investigations. Although we confirmed with participants several times that they had given honest answers in the honesty condition, and removed the data of those who reported being dishonest, participants may still have modified their answers to some extent due to social desirability or reluctance to admit lying, so complete honesty of the answers cannot be guaranteed. Although the accuracy of lie detection by the classifier is much higher than human performance, it is still insufficient to support reliable binary lie classification. Moreover, whether the eye behavior indicators remain valid for lie detection in less sensitive questionnaires needs further study. Future studies can incorporate more behavioral and implicit measures, such as electroencephalography, face-reading, and mouse-tracking, to enhance lie detection accuracy in questionnaire surveys.</p>
</sec>
<sec sec-type="conclusion" id="S5">
<title>Conclusion</title>
<p>This study explored the feasibility of eye-tracking for lie detection in questionnaire surveys. The eye behaviors in instructed lying and spontaneous lying conditions were investigated separately. Compared to previous studies on lie detection in questionnaire surveys, this study incorporated spontaneous lies in the actual situation. Because the participants experienced more natural and complete mental processes, the eye-tracking data were more reliable.</p>
<p>Through the two studies, the following conclusions were drawn: eye-tracking signatures of lying are not restricted to instructed deception but also apply to spontaneous deception. Pupil size and fixation behaviors were found to be useful for identifying lies in questionnaire surveys, whereas blink and saccade behaviors were not. When lying, respondents had larger pupil sizes and higher fixation counts and durations. The results also showed that respondents attended to different areas of the questionnaire when lying than when being honest. Furthermore, the deception classifier based on eye behaviors achieved a convincing classification rate (74.09%) for lies in the actual lie situation. These findings can provide anticipatory help to questionnaire publishers.</p>
</sec>
<sec sec-type="data-availability" id="S6">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="S7">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by Sichuan University. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="S8">
<title>Author Contributions</title>
<p>XF contributed to the conceptualization, investigation, methodology, formal analysis, wrote the manuscript, and revised the manuscript. YS contributed to the conceptualization, investigation, methodology, formal analysis, and revised the manuscript. XZ, XW, and XD contributed to the investigation and revised the manuscript. MW contributed to the conceptualization, investigation, methodology, resources, supervision, and revised the manuscript. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="s12">
<title>Funding</title>
<p>This work was supported by Yibin City (Program Number 00302053A2062).</p>
</sec>
<sec id="S10" sec-type="supplementary material"><title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fpsyg.2021.774961/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fpsyg.2021.774961/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Table_1.pdf" id="TS1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ask</surname> <given-names>K.</given-names></name> <name><surname>Calderon</surname> <given-names>S.</given-names></name> <name><surname>Mac Giolla</surname> <given-names>E.</given-names></name></person-group> (<year>2020</year>). <article-title>Human lie-detection performance: does random assignment versus self-selection of liars and truth-tellers matter?</article-title> <source><italic>J. Appl. Res. Mem. Cogn.</italic></source> <volume>9</volume> <fpage>128</fpage>&#x2013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1016/j.jarmac.2019.10.002</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bear</surname> <given-names>G. G.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Glutting</surname> <given-names>J.</given-names></name> <name><surname>Huang</surname> <given-names>X.</given-names></name> <name><surname>He</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name><etal/></person-group> (<year>2014</year>). <article-title>Understanding teacher-student relationships, student-student relationships, and conduct problems in China and the United States.</article-title> <source><italic>Int. J. School Educ. Psychol.</italic></source> <volume>2</volume> <fpage>247</fpage>&#x2013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1080/21683603.2014.883342</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Behroozi</surname> <given-names>M.</given-names></name> <name><surname>Lui</surname> <given-names>A.</given-names></name> <name><surname>Moore</surname> <given-names>I.</given-names></name> <name><surname>Ford</surname> <given-names>D.</given-names></name> <name><surname>Parnin</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Dazed: measuring the cognitive load of solving technical interview problems at the whiteboard</article-title>,&#x201D; in <source><italic>Proceedings of the 40th International Conference on Software Engineering: New Ideas and Emerging Results ICSE-NIER &#x2019;18</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>93</fpage>&#x2013;<lpage>96</lpage>. <pub-id pub-id-type="doi">10.1145/3183399.3183415</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berkovsky</surname> <given-names>S.</given-names></name> <name><surname>Taib</surname> <given-names>R.</given-names></name> <name><surname>Koprinska</surname> <given-names>I.</given-names></name> <name><surname>Wang</surname> <given-names>E.</given-names></name> <name><surname>Zeng</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name><etal/></person-group> (<year>2019</year>). &#x201C;<article-title>Detecting personality traits using eye-tracking data</article-title>,&#x201D; in <source><italic>Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems</italic></source>, (<publisher-loc>Glasgow Scotland</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1145/3290605.3300451</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bessonova</surname> <given-names>Y. V.</given-names></name> <name><surname>Oboznov</surname> <given-names>A. A.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Eye movements and lie detection</article-title>,&#x201D; in <source><italic>Intelligent Human Systems Integration Advances in Intelligent Systems and Computing</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Karwowski</surname> <given-names>W.</given-names></name> <name><surname>Ahram</surname> <given-names>T.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>149</fpage>&#x2013;<lpage>155</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-73888-8_25</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bond</surname> <given-names>C. F.</given-names></name> <name><surname>DePaulo</surname> <given-names>B. M.</given-names></name></person-group> (<year>2006</year>). <article-title>Accuracy of deception judgments.</article-title> <source><italic>Pers. Soc. Psychol. Rev.</italic></source> <volume>10</volume> <fpage>214</fpage>&#x2013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1207/s15327957pspr1003_2</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Borza</surname> <given-names>D.</given-names></name> <name><surname>Itu</surname> <given-names>R.</given-names></name> <name><surname>Danescu</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>In the eye of the deceiver: analyzing eye movements as a cue to deception.</article-title> <source><italic>J. Imaging</italic></source> <volume>4</volume>:<issue>120</issue>. <pub-id pub-id-type="doi">10.3390/jimaging4100120</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bradshaw</surname> <given-names>J.</given-names></name></person-group> (<year>1967</year>). <article-title>Pupil size as a measure of arousal during information processing.</article-title> <source><italic>Nature</italic></source> <volume>216</volume> <fpage>515</fpage>&#x2013;<lpage>516</lpage>. <pub-id pub-id-type="doi">10.1038/216515a0</pub-id> <pub-id pub-id-type="pmid">6057275</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bruneau</surname> <given-names>D.</given-names></name> <name><surname>Sasse</surname> <given-names>M. A.</given-names></name> <name><surname>McCarthy</surname> <given-names>J. D.</given-names></name></person-group> (<year>2002</year>). &#x201C;<article-title>The eyes never lie: the use of eyetracking data in HCI research</article-title>,&#x201D; in <source><italic>Proceedings of the CHI 2002: Conference on Human Factors in Computing Systems</italic></source>, (<publisher-loc>Minneapolis, MN</publisher-loc>: <publisher-name>ACM</publisher-name>).</citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brunetti</surname> <given-names>D. G.</given-names></name> <name><surname>Schlottmann</surname> <given-names>R. S.</given-names></name> <name><surname>Scott</surname> <given-names>A. B.</given-names></name> <name><surname>Hollrah</surname> <given-names>J. L.</given-names></name></person-group> (<year>1998</year>). <article-title>Instructed faking and MMPI-2 response latencies: the potential for assessing response validity.</article-title> <source><italic>J. Clin. Psychol.</italic></source> <volume>54</volume> <fpage>143</fpage>&#x2013;<lpage>153</lpage>. <pub-id pub-id-type="doi">10.1002/(SICI)1097-4679(199802)54:2&#x003C;143::AID-JCLP3&#x003E;3.0.CO;2-T</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Byvatov</surname> <given-names>E.</given-names></name> <name><surname>Fechner</surname> <given-names>U.</given-names></name> <name><surname>Sadowski</surname> <given-names>J.</given-names></name> <name><surname>Schneider</surname> <given-names>G.</given-names></name></person-group> (<year>2003</year>). <article-title>Comparison of support vector machine and artificial neural network systems for drug/nondrug classification.</article-title> <source><italic>J. Chem. Inf. Comput. Sci.</italic></source> <volume>43</volume> <fpage>1882</fpage>&#x2013;<lpage>1889</lpage>. <pub-id pub-id-type="doi">10.1021/ci0341161</pub-id> <pub-id pub-id-type="pmid">14632437</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>F.</given-names></name> <name><surname>Ruiz</surname> <given-names>N.</given-names></name> <name><surname>Choi</surname> <given-names>E.</given-names></name> <name><surname>Epps</surname> <given-names>J.</given-names></name> <name><surname>Khawaja</surname> <given-names>M. A.</given-names></name> <name><surname>Taib</surname> <given-names>R.</given-names></name><etal/></person-group> (<year>2013</year>). <article-title>Multimodal behavior and interaction as indicators of cognitive load.</article-title> <source><italic>ACM Trans. Interact. Intell. Syst.</italic></source> <volume>2</volume>:<issue>22</issue>. <pub-id pub-id-type="doi">10.1145/2395123.2395127</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>L.</given-names></name> <name><surname>Asgari</surname> <given-names>M.</given-names></name> <name><surname>Gale</surname> <given-names>R.</given-names></name> <name><surname>Wild</surname> <given-names>K.</given-names></name> <name><surname>Dodge</surname> <given-names>H.</given-names></name> <name><surname>Kaye</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Improving the assessment of mild cognitive impairment in advanced age with a novel multi-feature automated speech and language analysis of verbal fluency.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>11</volume>:<issue>535</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2020.00535</pub-id> <pub-id pub-id-type="pmid">32328008</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>S.</given-names></name> <name><surname>Epps</surname> <given-names>J.</given-names></name> <name><surname>Ruiz</surname> <given-names>N.</given-names></name> <name><surname>Chen</surname> <given-names>F.</given-names></name></person-group> (<year>2011</year>). &#x201C;<article-title>Eye activity as a measure of human mental effort in HCI</article-title>,&#x201D; in <source><italic>Proceedings of the 16th International Conference on Intelligent User Interfaces IUI &#x2019;11</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>315</fpage>&#x2013;<lpage>318</lpage>. <pub-id pub-id-type="doi">10.1145/1943403.1943454</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cook</surname> <given-names>A. E.</given-names></name> <name><surname>Hacker</surname> <given-names>D. J.</given-names></name> <name><surname>Webb</surname> <given-names>A. K.</given-names></name> <name><surname>Osher</surname> <given-names>D.</given-names></name> <name><surname>Kristjansson</surname> <given-names>S. D.</given-names></name> <name><surname>Woltz</surname> <given-names>D. J.</given-names></name><etal/></person-group> (<year>2012</year>). <article-title>Lyin&#x2019; eyes: ocular-motor measures of reading reveal deception.</article-title> <source><italic>J. Exp. Psychol. Appl.</italic></source> <volume>18</volume> <fpage>301</fpage>&#x2013;<lpage>313</lpage>. <pub-id pub-id-type="doi">10.1037/a0028307</pub-id> <pub-id pub-id-type="pmid">22545928</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Coolican</surname> <given-names>H.</given-names></name></person-group> (<year>2014</year>). <source><italic>Research Methods and Statistics in Psychology</italic></source>, <edition>6th Edn</edition>. <publisher-loc>London</publisher-loc>: <publisher-name>Psychology Press</publisher-name>.</citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cortes</surname> <given-names>C.</given-names></name> <name><surname>Vapnik</surname> <given-names>V.</given-names></name></person-group> (<year>1995</year>). <article-title>Support-vector networks.</article-title> <source><italic>Mach. Learn.</italic></source> <volume>20</volume> <fpage>273</fpage>&#x2013;<lpage>297</lpage>. <pub-id pub-id-type="doi">10.1007/BF00994018</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dalrymple</surname> <given-names>K. A.</given-names></name> <name><surname>Jiang</surname> <given-names>M.</given-names></name> <name><surname>Zhao</surname> <given-names>Q.</given-names></name> <name><surname>Elison</surname> <given-names>J. T.</given-names></name></person-group> (<year>2019</year>). <article-title>Machine learning accurately classifies age of toddlers based on eye tracking.</article-title> <source><italic>Sci. Rep.</italic></source> <volume>9</volume>:<issue>6255</issue>. <pub-id pub-id-type="doi">10.1038/s41598-019-42764-z</pub-id> <pub-id pub-id-type="pmid">31000762</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delgado-Herrera</surname> <given-names>M.</given-names></name> <name><surname>Reyes-Aguilar</surname> <given-names>A.</given-names></name> <name><surname>Giordano</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>What deception tasks used in the lab really do: systematic review and meta-analysis of ecological validity of fMRI deception tasks.</article-title> <source><italic>Neuroscience</italic></source> <volume>468</volume> <fpage>88</fpage>&#x2013;<lpage>109</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroscience.2021.06.005</pub-id> <pub-id pub-id-type="pmid">34111448</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>DePaulo</surname> <given-names>B. M.</given-names></name> <name><surname>Lindsay</surname> <given-names>J. J.</given-names></name> <name><surname>Malone</surname> <given-names>B. E.</given-names></name> <name><surname>Muhlenbruck</surname> <given-names>L.</given-names></name> <name><surname>Charlton</surname> <given-names>K.</given-names></name> <name><surname>Cooper</surname> <given-names>H.</given-names></name></person-group> (<year>2003</year>). <article-title>Cues to deception.</article-title> <source><italic>Psychol. Bull.</italic></source> <volume>129</volume> <fpage>74</fpage>&#x2013;<lpage>118</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.129.1.74</pub-id> <pub-id pub-id-type="pmid">12555795</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Einh&#x00E4;user</surname> <given-names>W.</given-names></name></person-group> (<year>2017</year>). &#x201C;<article-title>The pupil as marker of cognitive processes</article-title>,&#x201D; in <source><italic>Computational and Cognitive Neuroscience of Vision</italic></source>, <role>ed.</role> <person-group person-group-type="editor"><name><surname>Zhao</surname> <given-names>Q.</given-names></name></person-group> (<publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>141</fpage>&#x2013;<lpage>169</lpage>. <pub-id pub-id-type="doi">10.1007/978-981-10-0213-7_7</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fritz</surname> <given-names>C.</given-names></name> <name><surname>Morris</surname> <given-names>P.</given-names></name> <name><surname>Richler</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Effect size estimates: current use, calculations, and interpretation.</article-title> <source><italic>J. Exp. Psychol. Gen.</italic></source> <volume>141</volume> <fpage>2</fpage>&#x2013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1037/a0024338</pub-id> <pub-id pub-id-type="pmid">21823805</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ganis</surname> <given-names>G.</given-names></name> <name><surname>Kosslyn</surname> <given-names>S. M.</given-names></name> <name><surname>Stose</surname> <given-names>S.</given-names></name> <name><surname>Thompson</surname> <given-names>W. L.</given-names></name> <name><surname>Yurgelun-Todd</surname> <given-names>D. A.</given-names></name></person-group> (<year>2003</year>). <article-title>Neural correlates of different types of deception: an fMRI investigation.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>13</volume> <fpage>830</fpage>&#x2013;<lpage>836</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/13.8.830</pub-id> <pub-id pub-id-type="pmid">12853369</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>George</surname> <given-names>S.</given-names></name> <name><surname>Manohara Pai</surname> <given-names>M. M.</given-names></name> <name><surname>Pai</surname> <given-names>R. M.</given-names></name> <name><surname>Praharaj</surname> <given-names>S. K.</given-names></name></person-group> (<year>2017</year>). &#x201C;<article-title>Eye blink count and eye blink duration analysis for deception detection</article-title>,&#x201D; in <source><italic>Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI)</italic></source>, (<publisher-loc>Udupi</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>223</fpage>&#x2013;<lpage>229</lpage>. <pub-id pub-id-type="doi">10.1109/ICACCI.2017.8125844</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geven</surname> <given-names>L. M.</given-names></name> <name><surname>klein Selle</surname> <given-names>N.</given-names></name> <name><surname>Ben-Shakhar</surname> <given-names>G.</given-names></name> <name><surname>Kindt</surname> <given-names>M.</given-names></name> <name><surname>Verschuere</surname> <given-names>B.</given-names></name></person-group> (<year>2018</year>). <article-title>Self-initiated versus instructed cheating in the physiological concealed information test.</article-title> <source><italic>Biol. Psychol.</italic></source> <volume>138</volume> <fpage>146</fpage>&#x2013;<lpage>155</lpage>. <pub-id pub-id-type="doi">10.1016/j.biopsycho.2018.09.005</pub-id> <pub-id pub-id-type="pmid">30236614</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gonzales</surname> <given-names>V.</given-names></name></person-group> (<year>2018</year>). <source><italic>Exploring Pupil Diameter as a Lie Detection Method.</italic></source> Available online at: <ext-link ext-link-type="uri" xlink:href="https://diginole.lib.fsu.edu/islandora/object/fsu%3A621049/">https://diginole.lib.fsu.edu/islandora/object/fsu%3A621049/</ext-link> <comment>(accessed August 7, 2021)</comment>.</citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ho</surname> <given-names>D. Y. F.</given-names></name> <name><surname>Ho</surname> <given-names>R. T. H.</given-names></name></person-group> (<year>2008</year>). <article-title>Knowledge is a dangerous thing: authority relations, ideological conservatism, and creativity in confucian-heritage cultures.</article-title> <source><italic>J. Theor. Soc. Behav.</italic></source> <volume>38</volume> <fpage>67</fpage>&#x2013;<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1111/j.1468-5914.2008.00357.x</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holden</surname> <given-names>R. R.</given-names></name> <name><surname>Fekken</surname> <given-names>G. C.</given-names></name> <name><surname>Jackson</surname> <given-names>D. N.</given-names></name></person-group> (<year>1985</year>). <article-title>Structured personality test item characteristics and validity.</article-title> <source><italic>J. Res. Pers.</italic></source> <volume>19</volume> <fpage>386</fpage>&#x2013;<lpage>394</lpage>. <pub-id pub-id-type="doi">10.1016/0092-6566(85)90007-8</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holden</surname> <given-names>R. R.</given-names></name> <name><surname>Wood</surname> <given-names>L. L.</given-names></name> <name><surname>Tomashewski</surname> <given-names>L.</given-names></name></person-group> (<year>2001</year>). <article-title>Do response time limitations counteract the effect of faking on personality inventory validity?</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>81</volume> <fpage>160</fpage>&#x2013;<lpage>169</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.81.1.160</pub-id> <pub-id pub-id-type="pmid">11474721</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holtgraves</surname> <given-names>T.</given-names></name></person-group> (<year>2004</year>). <article-title>Social desirability and self-reports: testing models of socially desirable responding.</article-title> <source><italic>Pers. Soc. Psychol. Bull.</italic></source> <volume>30</volume> <fpage>161</fpage>&#x2013;<lpage>172</lpage>. <pub-id pub-id-type="doi">10.1177/0146167203259930</pub-id> <pub-id pub-id-type="pmid">15030631</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>C.-M.</given-names></name> <name><surname>Andrist</surname> <given-names>S.</given-names></name> <name><surname>Saupp&#x00E9;</surname> <given-names>A.</given-names></name> <name><surname>Mutlu</surname> <given-names>B.</given-names></name></person-group> (<year>2015</year>). <article-title>Using gaze patterns to predict task intent in collaboration.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>6</volume>:<issue>1049</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2015.01049</pub-id> <pub-id pub-id-type="pmid">26257694</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hui</surname> <given-names>E. K. P.</given-names></name> <name><surname>Sun</surname> <given-names>R. C. F.</given-names></name> <name><surname>Chow</surname> <given-names>S. S.-Y.</given-names></name> <name><surname>Chu</surname> <given-names>M. H.-T.</given-names></name></person-group> (<year>2011</year>). <article-title>Explaining Chinese students&#x2019; academic motivation: filial piety and self-determination.</article-title> <source><italic>Educ. Psychol.</italic></source> <volume>31</volume> <fpage>377</fpage>&#x2013;<lpage>392</lpage>. <pub-id pub-id-type="doi">10.1080/01443410.2011.559309</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kang</surname> <given-names>J.</given-names></name> <name><surname>Han</surname> <given-names>X.</given-names></name> <name><surname>Song</surname> <given-names>J.</given-names></name> <name><surname>Niu</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name></person-group> (<year>2020</year>). <article-title>The identification of children with autism spectrum disorder by SVM approach on EEG and eye-tracking data.</article-title> <source><italic>Comput. Biol. Med.</italic></source> <volume>120</volume>:<issue>103722</issue>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2020.103722</pub-id> <pub-id pub-id-type="pmid">32250854</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keskin</surname> <given-names>M.</given-names></name> <name><surname>Ooms</surname> <given-names>K.</given-names></name> <name><surname>Dogru</surname> <given-names>A. O.</given-names></name> <name><surname>De Maeyer</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>EEG &#x0026; eye tracking user experiments for spatial memory task on maps.</article-title> <source><italic>ISPRS Int. J. Geo-Inf.</italic></source> <volume>8</volume>:<issue>546</issue>. <pub-id pub-id-type="doi">10.3390/ijgi8120546</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keskin</surname> <given-names>M.</given-names></name> <name><surname>Ooms</surname> <given-names>K.</given-names></name> <name><surname>Dogru</surname> <given-names>A. O.</given-names></name> <name><surname>De Maeyer</surname> <given-names>P.</given-names></name></person-group> (<year>2020</year>). <article-title>Exploring the cognitive load of expert and novice map users using EEG and eye tracking.</article-title> <source><italic>ISPRS Int. J. Geo-Inf.</italic></source> <volume>9</volume>:<issue>429</issue>. <pub-id pub-id-type="doi">10.3390/ijgi9070429</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krumpal</surname> <given-names>I.</given-names></name></person-group> (<year>2013</year>). <article-title>Determinants of social desirability bias in sensitive surveys: a literature review.</article-title> <source><italic>Qual. Quant.</italic></source> <volume>47</volume> <fpage>2025</fpage>&#x2013;<lpage>2047</lpage>. <pub-id pub-id-type="doi">10.1007/s11135-011-9640-9</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kucewicz</surname> <given-names>M. T.</given-names></name> <name><surname>Dolezal</surname> <given-names>J.</given-names></name> <name><surname>Kremen</surname> <given-names>V.</given-names></name> <name><surname>Berry</surname> <given-names>B. M.</given-names></name> <name><surname>Miller</surname> <given-names>L. R.</given-names></name> <name><surname>Magee</surname> <given-names>A. L.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Pupil size reflects successful encoding and recall of memory in humans.</article-title> <source><italic>Sci. Rep.</italic></source> <volume>8</volume>:<issue>4949</issue>. <pub-id pub-id-type="doi">10.1038/s41598-018-23197-6</pub-id> <pub-id pub-id-type="pmid">29563536</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leary</surname> <given-names>M. R.</given-names></name> <name><surname>Kowalski</surname> <given-names>R. M.</given-names></name></person-group> (<year>1990</year>). <article-title>Impression management: a literature review and two-component model.</article-title> <source><italic>Psychol. Bull.</italic></source> <volume>107</volume> <fpage>34</fpage>&#x2013;<lpage>47</lpage>.</citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>J.</given-names></name> <name><surname>Ahn</surname> <given-names>J.-H.</given-names></name></person-group> (<year>2012</year>). <article-title>Attention to banner ads and their effectiveness: an eye-tracking approach.</article-title> <source><italic>Int. J. Electron. Comm.</italic></source> <volume>17</volume> <fpage>119</fpage>&#x2013;<lpage>137</lpage>. <pub-id pub-id-type="doi">10.2753/JEC1086-4415170105</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lensvelt-Mulders</surname> <given-names>G.</given-names></name></person-group> (<year>2008</year>). &#x201C;<article-title>Surveying sensitive topics</article-title>,&#x201D; in <source><italic>The International Handbook of Survey Methodology</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>de Leeuw</surname> <given-names>E. D.</given-names></name> <name><surname>Hox</surname> <given-names>J. J.</given-names></name> <name><surname>Dillman</surname> <given-names>D. A.</given-names></name></person-group> (<publisher-loc>Mahwah, NJ</publisher-loc>: <publisher-name>Erlbaum</publisher-name>).</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levine</surname> <given-names>T. R.</given-names></name></person-group> (<year>2018</year>). <article-title>Ecological validity and deception detection research design.</article-title> <source><italic>Commun. Methods Meas.</italic></source> <volume>12</volume> <fpage>45</fpage>&#x2013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1080/19312458.2017.1411471</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lim</surname> <given-names>J. Z.</given-names></name> <name><surname>Mountstephens</surname> <given-names>J.</given-names></name> <name><surname>Teo</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Emotion recognition using eye-tracking: taxonomy, review and current challenges.</article-title> <source><italic>Sensors</italic></source> <volume>20</volume>:<issue>2384</issue>. <pub-id pub-id-type="doi">10.3390/s20082384</pub-id> <pub-id pub-id-type="pmid">32331327</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maffei</surname> <given-names>A.</given-names></name> <name><surname>Angrilli</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>Spontaneous eye blink rate: an index of dopaminergic component of sustained attention and fatigue.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>123</volume> <fpage>58</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2017.11.009</pub-id> <pub-id pub-id-type="pmid">29133149</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathur</surname> <given-names>L.</given-names></name> <name><surname>Matari&#x0107;</surname> <given-names>M. J.</given-names></name></person-group> (<year>2020</year>). &#x201C;<article-title>Introducing representations of facial affect in automated multimodal deception detection</article-title>,&#x201D; in <source><italic>Proceedings of the 2020 International Conference on Multimodal Interaction ICMI &#x2019;20</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>305</fpage>&#x2013;<lpage>314</lpage>. <pub-id pub-id-type="doi">10.1145/3382507.3418864</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mazza</surname> <given-names>C.</given-names></name> <name><surname>Monaro</surname> <given-names>M.</given-names></name> <name><surname>Burla</surname> <given-names>F.</given-names></name> <name><surname>Colasanti</surname> <given-names>M.</given-names></name> <name><surname>Orr&#x00F9;</surname> <given-names>G.</given-names></name> <name><surname>Ferracuti</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Use of mouse-tracking software to detect faking-good behavior on personality questionnaires: an explorative study.</article-title> <source><italic>Sci. Rep.</italic></source> <volume>10</volume>:<issue>4835</issue>. <pub-id pub-id-type="doi">10.1038/s41598-020-61636-5</pub-id> <pub-id pub-id-type="pmid">32179844</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McFarland</surname> <given-names>L. A.</given-names></name> <name><surname>Ryan</surname> <given-names>A. M.</given-names></name></person-group> (<year>2000</year>). <article-title>Variance in faking across noncognitive measures.</article-title> <source><italic>J. Appl. Psychol.</italic></source> <volume>85</volume> <fpage>812</fpage>&#x2013;<lpage>821</lpage>. <pub-id pub-id-type="doi">10.1037/0021-9010.85.5.812</pub-id> <pub-id pub-id-type="pmid">11055152</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mottelson</surname> <given-names>A.</given-names></name> <name><surname>Knibbe</surname> <given-names>J.</given-names></name> <name><surname>Hornb&#x00E6;k</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Veritaps: truth estimation from mobile interaction</article-title>,&#x201D; in <source><italic>Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems</italic></source>, (<publisher-loc>Montreal QC</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1145/3173574.3174135</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nahari</surname> <given-names>G.</given-names></name> <name><surname>Nisin</surname> <given-names>Z.</given-names></name></person-group> (<year>2019</year>). <article-title>Digging further into the speech of liars: future research prospects in verbal lie detection.</article-title> <source><italic>Front. Psychiatry</italic></source> <volume>10</volume>:<issue>56</issue>. <pub-id pub-id-type="doi">10.3389/fpsyt.2019.00056</pub-id> <pub-id pub-id-type="pmid">30858807</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Otero</surname> <given-names>S. C.</given-names></name> <name><surname>Weekes</surname> <given-names>B. S.</given-names></name> <name><surname>Hutton</surname> <given-names>S. B.</given-names></name></person-group> (<year>2011</year>). <article-title>Pupil size changes during recognition memory.</article-title> <source><italic>Psychophysiology</italic></source> <volume>48</volume> <fpage>1346</fpage>&#x2013;<lpage>1353</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2011.01217.x</pub-id> <pub-id pub-id-type="pmid">21575007</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peltier</surname> <given-names>S. J.</given-names></name> <name><surname>Lisinski</surname> <given-names>J. M.</given-names></name> <name><surname>Noll</surname> <given-names>D. C.</given-names></name> <name><surname>LaConte</surname> <given-names>S. M.</given-names></name></person-group> (<year>2009</year>). <article-title>Support vector machine classification of complex fMRI data.</article-title> <source><italic>Annu. Int. Conf. IEEE Eng. Med. Biol. Soc.</italic></source> <volume>2009</volume> <fpage>5381</fpage>&#x2013;<lpage>5384</lpage>. <pub-id pub-id-type="doi">10.1109/IEMBS.2009.5332805</pub-id> <pub-id pub-id-type="pmid">19963901</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perkhofer</surname> <given-names>L.</given-names></name> <name><surname>Lehner</surname> <given-names>O.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Using gaze behavior to measure cognitive load</article-title>,&#x201D; in <source><italic>Information Systems and Neuroscience</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Davis</surname> <given-names>F. D.</given-names></name> <name><surname>Riedl</surname> <given-names>R.</given-names></name> <name><surname>vom Brocke</surname> <given-names>J.</given-names></name> <name><surname>L&#x00E9;ger</surname> <given-names>P.-M.</given-names></name> <name><surname>Randolph</surname> <given-names>A. B.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>73</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-01087-4_9</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Porter</surname> <given-names>S.</given-names></name> <name><surname>ten Brinke</surname> <given-names>L.</given-names></name></person-group> (<year>2010</year>). <article-title>The truth about lies: what works in detecting high-stakes deception?</article-title> <source><italic>Legal Criminol. Psychol.</italic></source> <volume>15</volume> <fpage>57</fpage>&#x2013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.1348/135532509X433151</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Preisend&#x00F6;rfer</surname> <given-names>P.</given-names></name> <name><surname>Wolter</surname> <given-names>F.</given-names></name></person-group> (<year>2014</year>). <article-title>Who is telling the truth? A validation study on determinants of response behavior in surveys.</article-title> <source><italic>Public Opin. Q.</italic></source> <volume>78</volume> <fpage>126</fpage>&#x2013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.1093/poq/nft079</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Proudfoot</surname> <given-names>J.</given-names></name> <name><surname>Jenkins</surname> <given-names>J.</given-names></name> <name><surname>Burgoon</surname> <given-names>J.</given-names></name> <name><surname>Nunamaker</surname> <given-names>J. F.</given-names></name></person-group> (<year>2016</year>). <article-title>More than meets the eye: how oculometric behaviors evolve over the course of automated deception detection interactions.</article-title> <source><italic>J. Manage. Inform. Syst.</italic></source> <volume>33</volume> <fpage>332</fpage>&#x2013;<lpage>360</lpage>. <pub-id pub-id-type="doi">10.1080/07421222.2016.1205929</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Proudfoot</surname> <given-names>J. G.</given-names></name> <name><surname>Jenkins</surname> <given-names>J. L.</given-names></name> <name><surname>Burgoon</surname> <given-names>J. K.</given-names></name> <name><surname>Nunamaker</surname> <given-names>J. F.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>Deception is in the eye of the communicator: investigating pupil diameter variations in automated deception detection interviews</article-title>,&#x201D; in <source><italic>Proceedings of the 2015 IEEE International Conference on Intelligence and Security Informatics (ISI)</italic></source>, (<publisher-loc>Baltimore, MD</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>97</fpage>&#x2013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1109/ISI.2015.7165946</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rauhut</surname> <given-names>H.</given-names></name> <name><surname>Krumpal</surname> <given-names>I.</given-names></name></person-group> (<year>2008</year>). <article-title>Die Durchsetzung sozialer Normen in Low-Cost- und High-Cost-Situationen / Enforcement of social norms in low-cost and high-cost situations.</article-title> <source><italic>Zeitschr. Soziol.</italic></source> <volume>37</volume> <fpage>380</fpage>&#x2013;<lpage>402</lpage>. <pub-id pub-id-type="doi">10.1515/zfsoz-2008-0502</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roma</surname> <given-names>P.</given-names></name> <name><surname>Verrocchio</surname> <given-names>M. C.</given-names></name> <name><surname>Mazza</surname> <given-names>C.</given-names></name> <name><surname>Marchetti</surname> <given-names>D.</given-names></name> <name><surname>Burla</surname> <given-names>F.</given-names></name> <name><surname>Cinti</surname> <given-names>M. E.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Could time detect a faking-good attitude? A study with the MMPI-2-RF.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>9</volume>:<issue>1064</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2018.01064</pub-id> <pub-id pub-id-type="pmid">30090076</pub-id></citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rudmann</surname> <given-names>D. S.</given-names></name> <name><surname>McConkie</surname> <given-names>G. W.</given-names></name> <name><surname>Zheng</surname> <given-names>X. S.</given-names></name></person-group> (<year>2003</year>). &#x201C;<article-title>Eyetracking in cognitive state detection for HCI</article-title>,&#x201D; in <source><italic>Proceedings of the 5th International Conference on Multimodal interfaces ICMI &#x2019;03</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>159</fpage>&#x2013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1145/958432.958464</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steil</surname> <given-names>J.</given-names></name> <name><surname>Koelle</surname> <given-names>M.</given-names></name> <name><surname>Heuten</surname> <given-names>W.</given-names></name> <name><surname>Boll</surname> <given-names>S.</given-names></name> <name><surname>Bulling</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>PrivacEye: privacy-preserving head-mounted eye tracking using egocentric scene image and eye movement features</article-title>,&#x201D; in <source><italic>Proceedings of the 11th ACM Symposium on Eye Tracking Research &#x0026; Applications ETRA &#x2019;19</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1145/3314111.3319913</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sudman</surname> <given-names>S.</given-names></name> <name><surname>Bradburn</surname> <given-names>N.</given-names></name> <name><surname>Schwarz</surname> <given-names>N.</given-names></name> <name><surname>Gullickson</surname> <given-names>T.</given-names></name></person-group> (<year>1997</year>). <article-title>Thinking about answers: the application of cognitive processes to survey methodology.</article-title> <source><italic>Psyccritiques</italic></source> <volume>42</volume> <fpage>652</fpage>&#x2013;<lpage>652</lpage>. <pub-id pub-id-type="doi">10.1037/000266</pub-id></citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Synnott</surname> <given-names>J.</given-names></name> <name><surname>Dietzel</surname> <given-names>D.</given-names></name> <name><surname>Ioannou</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>A review of the polygraph: history, methodology and current status.</article-title> <source><italic>Crime Psychol. Rev.</italic></source> <volume>1</volume> <fpage>59</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1080/23744006.2015.1060080</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Taherdoost</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <source><italic>Validity and Reliability of the Research Instrument; How to Test the Validation of a Questionnaire/Survey in a Research.</italic></source> <publisher-loc>Rochester, NY</publisher-loc>: <publisher-name>Social Science Research Network</publisher-name>, <pub-id pub-id-type="doi">10.2139/ssrn.3205040</pub-id></citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tomczak</surname> <given-names>M.</given-names></name> <name><surname>Tomczak</surname> <given-names>E.</given-names></name></person-group> (<year>2014</year>). <article-title>The need to report effect size estimates revisited. An overview of some recommended measures of effect size.</article-title> <source><italic>Trends Sport Sci.</italic></source> <volume>21</volume> <fpage>19</fpage>&#x2013;<lpage>25</lpage>.</citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tourangeau</surname> <given-names>R.</given-names></name> <name><surname>Rasinski</surname> <given-names>K. A.</given-names></name></person-group> (<year>1988</year>). <article-title>Cognitive processes underlying context effects in attitude measurement.</article-title> <source><italic>Psychol. Bull.</italic></source> <volume>103</volume> <fpage>299</fpage>&#x2013;<lpage>314</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.103.3.299</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tourangeau</surname> <given-names>R.</given-names></name> <name><surname>Rips</surname> <given-names>L. J.</given-names></name> <name><surname>Rasinski</surname> <given-names>K.</given-names></name></person-group> (<year>2000</year>). <source><italic>The Psychology of Survey Response.</italic></source> <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tourangeau</surname> <given-names>R.</given-names></name> <name><surname>Yan</surname> <given-names>T.</given-names></name></person-group> (<year>2007</year>). <article-title>Sensitive questions in surveys.</article-title> <source><italic>Psychol. Bull.</italic></source> <volume>133</volume> <fpage>859</fpage>&#x2013;<lpage>883</lpage>. <pub-id pub-id-type="doi">10.1037/0033-2909.133.5.859</pub-id> <pub-id pub-id-type="pmid">17723033</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tsai</surname> <given-names>M.-J.</given-names></name> <name><surname>Hou</surname> <given-names>H.-T.</given-names></name> <name><surname>Lai</surname> <given-names>M.-L.</given-names></name> <name><surname>Liu</surname> <given-names>W.-Y.</given-names></name> <name><surname>Yang</surname> <given-names>F.-Y.</given-names></name></person-group> (<year>2012</year>). <article-title>Visual attention for solving multiple-choice science problem: an eye-tracking analysis.</article-title> <source><italic>Comput. Educ.</italic></source> <volume>58</volume> <fpage>375</fpage>&#x2013;<lpage>385</lpage>. <pub-id pub-id-type="doi">10.1016/j.compedu.2011.07.012</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Twyman</surname> <given-names>N.</given-names></name> <name><surname>Schuetzler</surname> <given-names>R. M.</given-names></name> <name><surname>Proudfoot</surname> <given-names>J. G.</given-names></name> <name><surname>Elkins</surname> <given-names>A.</given-names></name></person-group> (<year>2013</year>). &#x201C;<article-title>A systems approach to countermeasures in credibility assessment interviews</article-title>,&#x201D; in <source><italic>Proceedings of the International Conference on Information Systems (ICIS 2013)</italic></source>, <volume>Vol. 20</volume> (<publisher-loc>Milan</publisher-loc>).</citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Twyman</surname> <given-names>N. W.</given-names></name> <name><surname>Lowry</surname> <given-names>P. B.</given-names></name> <name><surname>Burgoon</surname> <given-names>J. K.</given-names></name> <name><surname>Nunamaker</surname> <given-names>J. F.</given-names></name></person-group> (<year>2014</year>). <article-title>Autonomous scientifically controlled screening systems for detecting information purposely concealed by individuals.</article-title> <source><italic>J. Manage. Inform. Syst.</italic></source> <volume>31</volume> <fpage>106</fpage>&#x2013;<lpage>137</lpage>. <pub-id pub-id-type="doi">10.1080/07421222.2014.995535</pub-id></citation></ref>
<ref id="B70"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>van der Wel</surname> <given-names>P.</given-names></name> <name><surname>van Steenbergen</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Pupil dilation as an index of effort in cognitive control tasks: a review.</article-title> <source><italic>Psychon. Bull. Rev.</italic></source> <volume>25</volume> <fpage>2005</fpage>&#x2013;<lpage>2015</lpage>. <pub-id pub-id-type="doi">10.3758/s13423-018-1432-y</pub-id> <pub-id pub-id-type="pmid">29435963</pub-id></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Hooft</surname> <given-names>E. A. J.</given-names></name> <name><surname>Born</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Intentional response distortion on personality tests: using eye-tracking to understand response processes when faking.</article-title> <source><italic>J. Appl. Psychol.</italic></source> <volume>97</volume> <fpage>301</fpage>&#x2013;<lpage>316</lpage>. <pub-id pub-id-type="doi">10.1037/a0025711</pub-id> <pub-id pub-id-type="pmid">21967296</pub-id></citation></ref>
<ref id="B72"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>von Hippel</surname> <given-names>W.</given-names></name> <name><surname>Trivers</surname> <given-names>R.</given-names></name></person-group> (<year>2011</year>). <article-title>The evolution and psychology of self-deception.</article-title> <source><italic>Behav. Brain Sci.</italic></source> <volume>34</volume> <fpage>1</fpage>&#x2013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1017/S0140525X10001354</pub-id> <pub-id pub-id-type="pmid">21288379</pub-id></citation></ref>
<ref id="B73"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vrij</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Verbal lie detection tools from an applied perspective</article-title>,&#x201D; in <source><italic>Detecting Concealed Information and Deception</italic></source>, <role>ed.</role> <person-group person-group-type="editor"><name><surname>Rosenfeld</surname> <given-names>J. P.</given-names></name></person-group> (<publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Elsevier</publisher-name>), <fpage>297</fpage>&#x2013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.1016/B978-0-12-812729-2.00013-6</pub-id></citation></ref>
<ref id="B74"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vrij</surname> <given-names>A.</given-names></name> <name><surname>Edward</surname> <given-names>K.</given-names></name> <name><surname>Bull</surname> <given-names>R.</given-names></name></person-group> (<year>2001</year>). <article-title>Stereotypical verbal and nonverbal responses while deceiving others.</article-title> <source><italic>Pers. Soc. Psychol. Bull.</italic></source> <volume>27</volume> <fpage>899</fpage>&#x2013;<lpage>909</lpage>. <pub-id pub-id-type="doi">10.1177/0146167201277012</pub-id></citation></ref>
<ref id="B75"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vrij</surname> <given-names>A.</given-names></name> <name><surname>Fisher</surname> <given-names>R. P.</given-names></name></person-group> (<year>2016</year>). <article-title>Which lie detection tools are ready for use in the criminal justice system?</article-title> <source><italic>J. Appl. Res. Mem. Cogn.</italic></source> <volume>5</volume> <fpage>302</fpage>&#x2013;<lpage>307</lpage>. <pub-id pub-id-type="doi">10.1016/j.jarmac.2016.06.014</pub-id></citation></ref>
<ref id="B76"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vrij</surname> <given-names>A.</given-names></name> <name><surname>Fisher</surname> <given-names>R. P.</given-names></name> <name><surname>Blank</surname> <given-names>H.</given-names></name></person-group> (<year>2017</year>). <article-title>A cognitive approach to lie detection: a meta-analysis.</article-title> <source><italic>Legal Criminol. Psychol.</italic></source> <volume>22</volume> <fpage>1</fpage>&#x2013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1111/lcrp.12088</pub-id></citation></ref>
<ref id="B77"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vrij</surname> <given-names>A.</given-names></name> <name><surname>Oliveira</surname> <given-names>J.</given-names></name> <name><surname>Hammond</surname> <given-names>A.</given-names></name> <name><surname>Ehrlichman</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <article-title>Saccadic eye movement rate as a cue to deceit.</article-title> <source><italic>J. Appl. Res. Mem. Cogn.</italic></source> <volume>4</volume> <fpage>15</fpage>&#x2013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1016/j.jarmac.2014.07.005</pub-id></citation></ref>
<ref id="B78"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Walzenbach</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Hiding sensitive topics by design?: an experiment on the reduction of social desirability bias in factorial surveys.</article-title> <source><italic>Survey Res. Methods</italic></source> <volume>13</volume> <fpage>103</fpage>&#x2013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.18148/srm/2019.v13i1.7243</pub-id></citation></ref>
<ref id="B79"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>J. T.</given-names></name> <name><surname>Spezio</surname> <given-names>M.</given-names></name> <name><surname>Camerer</surname> <given-names>C. F.</given-names></name></person-group> (<year>2010</year>). <article-title>Pinocchio&#x2019;s pupil: using eyetracking and pupil dilation to understand truth telling and deception in sender-receiver games.</article-title> <source><italic>Am. Econ. Rev.</italic></source> <volume>100</volume> <fpage>984</fpage>&#x2013;<lpage>1007</lpage>. <pub-id pub-id-type="doi">10.1257/aer.100.3.984</pub-id></citation></ref>
<ref id="B80"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Q.</given-names></name> <name><surname>Yang</surname> <given-names>S.</given-names></name> <name><surname>Liu</surname> <given-names>M.</given-names></name> <name><surname>Cao</surname> <given-names>Z.</given-names></name> <name><surname>Ma</surname> <given-names>Q.</given-names></name></person-group> (<year>2014</year>). <article-title>An eye-tracking study of website complexity from cognitive load perspective.</article-title> <source><italic>Decision Supp. Syst.</italic></source> <volume>62</volume> <fpage>1</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1016/j.dss.2014.02.007</pub-id></citation></ref>
<ref id="B81"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Webb</surname> <given-names>A. K.</given-names></name> <name><surname>Hacker</surname> <given-names>D. J.</given-names></name> <name><surname>Osher</surname> <given-names>D.</given-names></name> <name><surname>Cook</surname> <given-names>A. E.</given-names></name> <name><surname>Woltz</surname> <given-names>D. J.</given-names></name> <name><surname>Kristjansson</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>2009</year>). &#x201C;<article-title>Eye movements and pupil size reveal deception in computer administered questionnaires</article-title>,&#x201D; in <source><italic>Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience, Lecture Notes in Computer Science</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Schmorrow</surname> <given-names>D. D.</given-names></name> <name><surname>Estabrooke</surname> <given-names>I. V.</given-names></name> <name><surname>Grootjen</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>553</fpage>&#x2013;<lpage>562</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-02812-0_64</pub-id></citation></ref>
<ref id="B82"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wei</surname> <given-names>M.</given-names></name> <name><surname>Zhou</surname> <given-names>Y.</given-names></name> <name><surname>Barber</surname> <given-names>C.</given-names></name> <name><surname>den Brok</surname> <given-names>P.</given-names></name></person-group> (<year>2015</year>). <article-title>Chinese students&#x2019; perceptions of teacher&#x2013;student interpersonal behavior and implications.</article-title> <source><italic>System</italic></source> <volume>55</volume> <fpage>134</fpage>&#x2013;<lpage>144</lpage>. <pub-id pub-id-type="doi">10.1016/j.system.2015.09.007</pub-id></citation></ref>
<ref id="B83"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wright</surname> <given-names>G.</given-names></name> <name><surname>Berry</surname> <given-names>C.</given-names></name> <name><surname>Bird</surname> <given-names>G.</given-names></name></person-group> (<year>2013</year>). <article-title>Deceptively simple&#x2026; The &#x201C;deception-general&#x201D; ability and the need to put the liar under the spotlight.</article-title> <source><italic>Front. Neurosci.</italic></source> <volume>7</volume>:<issue>152</issue>. <pub-id pub-id-type="doi">10.3389/fnins.2013.00152</pub-id> <pub-id pub-id-type="pmid">24009549</pub-id></citation></ref>
<ref id="B84"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ye</surname> <given-names>C.</given-names></name> <name><surname>Xiong</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>The influences of product similarity on consumer preferences: a study based on eye-tracking analysis.</article-title> <source><italic>Cogn. Tech. Work</italic></source> <volume>22</volume> <fpage>603</fpage>&#x2013;<lpage>613</lpage>. <pub-id pub-id-type="doi">10.1007/s10111-019-00584-1</pub-id></citation></ref>
<ref id="B85"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yin</surname> <given-names>L.</given-names></name> <name><surname>Reuter</surname> <given-names>M.</given-names></name> <name><surname>Weber</surname> <given-names>B.</given-names></name></person-group> (<year>2016</year>). <article-title>Let the man choose what to do: neural correlates of spontaneous lying and truth-telling.</article-title> <source><italic>Brain Cogn.</italic></source> <volume>102</volume> <fpage>13</fpage>&#x2013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1016/j.bandc.2015.11.007</pub-id> <pub-id pub-id-type="pmid">26685089</pub-id></citation></ref>
<ref id="B86"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zagermann</surname> <given-names>J.</given-names></name> <name><surname>Pfeil</surname> <given-names>U.</given-names></name> <name><surname>Reiterer</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>Measuring cognitive load using eye tracking technology in visual computing</article-title>,&#x201D; in <source><italic>Proceedings of the Beyond Time and Errors on Novel Evaluation Methods for Visualization - BELIV &#x2019;16</italic></source>, (<publisher-loc>Baltimore, MD</publisher-loc>: <publisher-name>ACM Press</publisher-name>), <fpage>78</fpage>&#x2013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1145/2993901.2993908</pub-id></citation></ref>
<ref id="B87"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>W.-L.</given-names></name> <name><surname>Dong</surname> <given-names>B.-N.</given-names></name> <name><surname>Lu</surname> <given-names>B.-L.</given-names></name></person-group> (<year>2014</year>). &#x201C;<article-title>Multimodal emotion recognition using EEG and eye tracking data</article-title>,&#x201D; in <source><italic>Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society</italic></source>, (<publisher-loc>Chicago, IL</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>5040</fpage>&#x2013;<lpage>5043</lpage>. <pub-id pub-id-type="doi">10.1109/EMBC.2014.6944757</pub-id> <pub-id pub-id-type="pmid">25571125</pub-id></citation></ref>
<ref id="B88"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wei</surname> <given-names>Z.-H.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name></person-group> (<year>2015</year>). <article-title>Decision process tracing: evidence from eye-movement data.</article-title> <source><italic>Adv. Psychol. Sci.</italic></source> <volume>23</volume>:<issue>2029</issue>. <pub-id pub-id-type="doi">10.3724/SP.J.1042.2015.02029</pub-id></citation></ref>
<ref id="B89"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zuckerman</surname> <given-names>M.</given-names></name> <name><surname>DePaulo</surname> <given-names>B. M.</given-names></name> <name><surname>Rosenthal</surname> <given-names>R.</given-names></name></person-group> (<year>1981</year>). <article-title>Verbal and nonverbal communication of deception.</article-title> <source><italic>Adv. Exp. Soc. Psychol.</italic></source> <volume>14</volume> <fpage>1</fpage>&#x2013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1016/S0065-2601(08)60369-X</pub-id></citation></ref>
</ref-list></back>
</article>