<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2021.707809</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The Effectiveness of Facial Expression Recognition in Detecting Emotional Responses to Sound Interventions in Older Adults With Dementia</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Liu</surname> <given-names>Ying</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Wang</surname> <given-names>Zixuan</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/1318969/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Yu</surname> <given-names>Ge</given-names></name>
</contrib>
</contrib-group>
<aff><institution>Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology</institution>, <addr-line>Harbin</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Pyoung Jik Lee, University of Liverpool, United Kingdom</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Jooyoung Hong, Chungnam National University, South Korea; Jin Yong Jeon, Hanyang University, South Korea</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Ying Liu <email>liuying01&#x00040;hit.edu.cn</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Environmental Psychology, a section of the journal Frontiers in Psychology</p></fn></author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>08</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>12</volume>
<elocation-id>707809</elocation-id>
<history>
<date date-type="received">
<day>10</day>
<month>05</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>16</day>
<month>07</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2021 Liu, Wang and Yu.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Liu, Wang and Yu</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>This research uses facial expression recognition software (FaceReader) to explore the influence of different sound interventions on the emotions of older people with dementia. The field experiment was carried out in the public activity space of an older adult care facility. Three intervention sound sources were used, namely, music, stream, and birdsong. Data collected through the Self-Assessment Manikin Scale (SAM) were compared with facial expression recognition (FER) data. FaceReader identified differences in the emotional responses of older people with dementia to different sound interventions and revealed changes in facial expressions over time. The facial expressions of the participants showed significantly higher valence for all three sound interventions than in the intervention without sound (<italic>p</italic> &#x0003C; 0.01). The indices of sadness, fear, and disgust differed significantly between the different sound interventions. For example, after the start of the birdsong intervention, the disgust index initially increased by 0.06 from 0 s to about 20 s, followed by a linear downward trend, with an average reduction of 0.03 per 20 s. In addition, valence and arousal were significantly lower when the sound intervention began before, rather than concurrently with, the start of the activity (<italic>p</italic> &#x0003C; 0.01). Moreover, in the birdsong and stream interventions, there were significant differences between intervention days (<italic>p</italic> &#x0003C; 0.05 or <italic>p</italic> &#x0003C; 0.01). Furthermore, facial expression valence significantly differed by age and gender. 
Finally, a comparison of the SAM and FER results showed that, in the music intervention, the valence in the first 80 s helps to predict dominance (<italic>r</italic> = 0.600) and acoustic comfort (<italic>r</italic> = 0.545); in the stream sound intervention, the first 40 s helps to predict pleasure (<italic>r</italic> = 0.770) and acoustic comfort (<italic>r</italic> = 0.766); for the birdsong intervention, the first 20 s helps to predict dominance (<italic>r</italic> = 0.824) and arousal (<italic>r</italic> = 0.891).</p></abstract>
<kwd-group>
<kwd>facial expression recognition</kwd>
<kwd>sound intervention</kwd>
<kwd>emotion</kwd>
<kwd>type of sound source</kwd>
<kwd>elderly with dementia</kwd>
</kwd-group>
<counts>
<fig-count count="11"/>
<table-count count="5"/>
<equation-count count="0"/>
<ref-count count="68"/>
<page-count count="16"/>
<word-count count="9596"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>Introduction</title>
<p>Dementia is a set of syndromes characterized by memory and cognitive impairment caused by brain diseases. China currently has the largest population of older people with dementia in the world, &#x0007E;14 million (Jia et al., <xref ref-type="bibr" rid="B26">2020</xref>). The decline in cognitive function causes older adults with dementia to gradually lose the ability and opportunities to engage in various activities, and scarce activity can easily induce depression and agitation behavior (Mohler et al., <xref ref-type="bibr" rid="B42">2018</xref>). Lack of external stimuli is a prominent cause of negative emotions in older people with dementia. Studies have found that sensory stimulation through acoustic intervention can reduce the agitation behavior of older people with dementia (Riley-Doucet and Dunn, <xref ref-type="bibr" rid="B48">2013</xref>; Nishiura et al., <xref ref-type="bibr" rid="B44">2018</xref>; Syed et al., <xref ref-type="bibr" rid="B55">2020</xref>). Therefore, creating a healthy acoustic environment for older people with dementia has become an urgent problem to be solved worldwide.</p>
<p>Emotions can be perceived and evaluated, and their status changes as the person&#x02013;environment relationship changes (Rolls, <xref ref-type="bibr" rid="B49">2019</xref>). Despite their cognitive impairment, older adults with dementia continue to display emotions, and their internal emotional processing may be intact or partially retained (Satler et al., <xref ref-type="bibr" rid="B51">2010</xref>); specifically, they retain the ability to feel and acquire emotions (Blessing et al., <xref ref-type="bibr" rid="B7">2006</xref>). In addition, the emotions reflected by facial expressions are similar between older adults with mild dementia and typical older adults (Smith, <xref ref-type="bibr" rid="B53">1995</xref>). Along with the decline in cognitive function, older adults with dementia can experience various emotional problems, such as anxiety, depression, and excitement. At present, there are no effective treatments or drugs for dementia. Therefore, effective emotional intervention is especially important to suppress negative emotions and generate positive emotions (Marquardt et al., <xref ref-type="bibr" rid="B38">2014</xref>). Common methods include environmental intervention, behavioral intervention, psychological intervention, social therapy, and entertainment therapy (Howe, <xref ref-type="bibr" rid="B24">2014</xref>).</p>
<p>Some previous studies have shown that environmental interventions can play a therapeutic role for older adults with dementia (Satariano, <xref ref-type="bibr" rid="B50">2006</xref>). In this regard, the acoustic environment is important, and appropriate sound interventions can help delay the onset of dementia (Wong et al., <xref ref-type="bibr" rid="B58">2014</xref>). Music has been widely used in treating dementia during the past decade, and remarkable results have been achieved with respect to memory and mood disorders (Ailun and Zhemin, <xref ref-type="bibr" rid="B1">2018</xref>; Fraile et al., <xref ref-type="bibr" rid="B16">2019</xref>). Music can reduce depression (Li et al., <xref ref-type="bibr" rid="B32">2019</xref>) and improve behavioral disorders, anxiety, and restlessness in older people with dementia (Gomez-Romero et al., <xref ref-type="bibr" rid="B19">2017</xref>). In addition, some studies have investigated how best to design the acoustic environment for older adults with dementia based on the phenomenon of auditory masking (Hong et al., <xref ref-type="bibr" rid="B23">2020</xref>). For example, adding white noise to the environment may mitigate some auditory hallucinations, helping older adults with dementia to temporarily relax. White noise can also reduce the mental and behavioral symptoms of older adults with dementia (Kaneko et al., <xref ref-type="bibr" rid="B27">2013</xref>). Conversely, some studies have revealed a negative impact of noise on the quality of life of older people with dementia. For example, some studies showed that high noise levels can lower the social mood of older people with dementia and induce falls (Garre-Olmo et al., <xref ref-type="bibr" rid="B18">2012</xref>; Jensen and Padilla, <xref ref-type="bibr" rid="B25">2017</xref>). 
When the daytime noise level is continuously higher than 55 dBA, it can induce emotional and behavioral agitation in older adults with dementia (Harding et al., <xref ref-type="bibr" rid="B21">2013</xref>). The current development trend of the acoustic environment is changing from noise control to soundscape creation, that is, from reducing negative health effects to promoting positive health trends (Kang et al., <xref ref-type="bibr" rid="B28">2020</xref>). However, research on how the acoustic environment can promote the health of older adults with dementia has so far focused only on music and noise. Whether other types of sound interventions, such as birdsong and stream sounds, improve mood and health in older people with dementia has not been examined. In addition, some studies have shown that the playing time of a sound source is an important factor affecting how people perceive sound (Staats and Hartig, <xref ref-type="bibr" rid="B54">2004</xref>; Korpela et al., <xref ref-type="bibr" rid="B29">2008</xref>). However, no research has examined whether the timing of a sound intervention affects emotions.</p>
<p>Prior research on emotions has mainly been conducted at three levels, namely, physiology, cognition, and behavior. Different research levels correspond to different research contents and methods (Zhaolan, <xref ref-type="bibr" rid="B65">2005</xref>). However, wearing physiological measuring devices may induce negative emotions in older people with dementia, who are particularly prone to mood swings.</p>
<p>The most common approach in emotion research draws on cognitive theory, which posits that a stimulus can only produce a specific emotion after the cognitive response of the subject (Danling, <xref ref-type="bibr" rid="B14">2001</xref>). The main method adopted is the subjective questionnaire. For example, Meng et al. (<xref ref-type="bibr" rid="B41">2020b</xref>) studied the influence of music on communication emotions through a field questionnaire, which asked participants to evaluate their emotional state. Zhihui (<xref ref-type="bibr" rid="B66">2015</xref>) and Xie et al. (<xref ref-type="bibr" rid="B59">2020</xref>) conducted field experiments in train stations and hospital nursing units, asking participants how various types of sound sources affect their emotions. However, surveys have several limitations. First, the questionnaire is subjective, and an &#x0201C;experimenter effect&#x0201D; might occur if the questionnaire is not well-designed (Brown et al., <xref ref-type="bibr" rid="B10">2011</xref>). Second, a single-wave survey cannot show trends over time in how participants react to a sound intervention, which precludes assessing the role of time in the intervention process.</p>
<p>The third main research avenue is the study of behavioral emotions. Behaviorists believe that external behaviors caused by emotions can reflect the true inner feelings of a person (Yanna, <xref ref-type="bibr" rid="B61">2014</xref>). The main method is to measure emotional changes through facial, verbal, and bodily expressions. Psychologists generally regard expressions as a quantifiable manifestation of emotional change. As a tool for evaluating emotions, the software FaceReader, based on facial expression recognition (FER), has been applied in psychological evaluation (Bartlett et al., <xref ref-type="bibr" rid="B6">2005</xref>; Amor et al., <xref ref-type="bibr" rid="B3">2014</xref>; Zarbakhsh and Demirel, <xref ref-type="bibr" rid="B64">2018</xref>). The effectiveness of FER has been proven in many previous studies, and it can measure emotions with more than 87% accuracy (Terzis et al., <xref ref-type="bibr" rid="B56">2010</xref>). The validity of FaceReader for East Asian people, in particular, has been shown to be 71% (Axelsson et al., <xref ref-type="bibr" rid="B4">2010</xref>; Yang and Hongding, <xref ref-type="bibr" rid="B60">2015</xref>). The efficiency of this method has been tested in many research fields. For example, Hadinejad et al. (<xref ref-type="bibr" rid="B20">2019</xref>) showed that arousal and positive emotions diminished while participants watched travel advertisements. Leitch et al. (<xref ref-type="bibr" rid="B30">2015</xref>) found that the length of time after tasting sweeteners affected the valence and arousal of facial expressions. In addition, Meng et al. (<xref ref-type="bibr" rid="B40">2020a</xref>) conducted laboratory experiments to test the effectiveness of facial expressions for detecting sound perception and reported that the type of sound source had a significant impact on the valence and other indicators of facial expressions. FER has also been used in research on the health of older adults with dementia. 
Re (<xref ref-type="bibr" rid="B47">2003</xref>) used a facial expression system to analyze the facial expression patterns and facial movement patterns of older people with severe dementia. Lints-Martindale et al. (<xref ref-type="bibr" rid="B34">2007</xref>) measured the degree of pain in older adults with dementia through a facial expression system. However, no study has tested whether FER can be used to investigate the effect of the acoustic environment on the emotions of older people with dementia. In addition, characteristics of the general population, such as gender and age, are related to emotions (Ma et al., <xref ref-type="bibr" rid="B37">2017</xref>; Yi and Kang, <xref ref-type="bibr" rid="B62">2019</xref>). However, in older adults with dementia, it is not clear whether these characteristics affect facial expression results.</p>
<p>To address this gap in the literature, this study explored the effectiveness of FER in measuring how sound interventions affect the emotions of older people with dementia. Specifically, this study focuses on the following research questions: (1) Can facial expression analysis systems be used to study the effects of sound interventions on the emotions of older people with dementia? (2) How do different types of sound interventions affect the valence and other facial expression indicators of older people with dementia? (3) Do demographic and time factors, such as age, gender, Mini-Mental State Examination (MMSE) scores, intervention duration, and intervention days, moderate these effects? A field experiment was conducted to collect facial expression data from 35 older people with dementia in an older adult care facility in Changchun, China. The experiment included three sound sources typically preferred by older people with dementia: music, stream, and birdsong.</p>
</sec>
<sec sec-type="materials and methods" id="s2">
<title>Materials and Methods</title>
<sec>
<title>Participants</title>
<p>The participants in this study were older people with dementia residing at seven institutes in Changchun, China. A total of 35 older people with mild dementia were selected, comprising 16 men and 19 women aged 60&#x02013;90 years (mean = 81, <italic>SD</italic> = 7). The number of participants was determined based on similar related experiments (El Haj et al., <xref ref-type="bibr" rid="B15">2015</xref>; Cuddy et al., <xref ref-type="bibr" rid="B12">2017</xref>).</p>
<p>The following selection criteria were applied. First, participants had to be at least 60 years old. Second, participants had to score 21&#x02013;27 on the MMSE, indicating mild cognitive impairment or dementia. Third, participants had to be able to communicate through normal conversation and have normal hearing. Fourth, participants were required to have &#x0003C;5 years of music training to ensure that the music intervention induced cognitive emotions rather than memory emotions (Cuddy et al., <xref ref-type="bibr" rid="B13">2015</xref>). Fifth, any individuals with obvious symptoms of anxiety or depression were excluded. Sixth, participants were required to refrain from smoking or drinking alcohol, coffee, or other beverages that stimulate the sympathetic nervous system during the 6 h before the test (Li and Kang, <xref ref-type="bibr" rid="B33">2019</xref>). Finally, written informed consent was obtained from all participants before the test began.</p>
</sec>
<sec>
<title>Activity</title>
<p>To select the type of activity that would best facilitate the sound intervention experiment, we visited seven elderly care facilities in northern China to select older people with dementia. Through observation, we identified that older people with dementia participated in painting, origami, singing, gardening, finger exercises, Tai Chi, ball sports, card games, watching TV, and walking. Finger exercises were selected as the activity for this experiment for four reasons. First, of the abovementioned activities, finger exercises were the one in which older people with dementia in the seven elderly care facilities participated most actively. Second, they are convenient for capturing facial expressions because participants are seated during the exercise, facing forward, and body movements are relatively limited. Third, because finger exercises are performed collectively, fewer experimental sessions were required, reducing the error associated with repeated experiments. Finally, the finger exercise itself does not produce noise, so it does not interfere with the sound intervention.</p>
</sec>
<sec>
<title>Experiment Site</title>
<p>Emotion experiments are usually carried out in the field or a laboratory. Field experiments are conducted in a naturally occurring environment, with high reliability and authenticity (Harrison, <xref ref-type="bibr" rid="B22">2016</xref>). A key consideration in this study is that elderly with dementia are particularly sensitive to unfamiliar environments. Thus, to ensure that the participants were as comfortable as possible and thereby to improve the reliability and validity of the results, it was necessary to implement the intervention in a place familiar to them (El Haj et al., <xref ref-type="bibr" rid="B15">2015</xref>). After considering the sensitivity of participants and the collective nature of the finger exercise activity, we decided to conduct a field experiment and hence selected the public activity space of an institute in Changchun, China, as the experiment site.</p>
</sec>
<sec>
<title>Sound Source</title>
<p>Some previous studies have shown that the following six types of sound sources may help to improve mood, namely, music, birdsong, fountain, stream, wind/rain, and wind/leaves (Zhongzhe, <xref ref-type="bibr" rid="B67">2016</xref>; Hong et al., <xref ref-type="bibr" rid="B23">2020</xref>). Birdsong is concentrated mainly in the high-frequency region, whereas the other sound sources are concentrated mainly in low frequencies, and the music has an obvious rhythm. An external speaker was used for the output of the sound source in the experiment, as prolonged use of a headset would make the participants uncomfortable and interfere with the experimental results. As it is difficult to distinguish between the emotions induced by the music and those induced by the lyrics of songs, instrumental music is more suitable for such an experiment (Cuddy et al., <xref ref-type="bibr" rid="B13">2015</xref>). Therefore, we selected a piano performance of &#x0201C;Red River Valley&#x0201D; as the music intervention stimulus: the song was included in the Chinese Academy Award film of the same name, released in 1996. The film <italic>Red River Valley</italic> shows the heroic and unyielding national spirit of the Chinese people and is well-known among the older adult participants. A previous study on music therapy showed that this song has the effect of regulating emotions (Shuping et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
<p>To better understand participants&#x02019; sound source preferences, while also considering the impact of sound interventions on the work of care staff, a survey was conducted. Across the seven elderly care facilities, a total of 73 older people with dementia (35 men, 38 women; mean age = 79, <italic>SD</italic> = 9) were surveyed on their sound source preferences. The 1-min equivalent sound pressure level (SPL) of each audio clip was adjusted to 55 dB(A) using Adobe Audition CS6 to remove differences in volume between the stimuli and to ensure that the participants listened to the auditory stimuli under similar playback SPL conditions. The background noise was below 45 dB(A) during the survey (Zhou et al., <xref ref-type="bibr" rid="B68">2020</xref>). The selected retirement facilities met the following two criteria: (1) providing sufficient daily activities and being fully equipped, to ensure that the conditions in which older adults reside would not affect their evaluation of the sound sources, and (2) having 10 or more residents, allowing efficient distribution of the questionnaire and increasing the statistical reliability of the collected data. Each sound source was played in a loop for 1 min. At the end of each sound source, participants had 10 s to complete a sound preference questionnaire for that sound source. We used a Likert scale in the sound preference questionnaire, as its structural simplicity and relative clarity make it particularly suitable for completion by older people with dementia. The questionnaire design is outlined in <xref ref-type="table" rid="T1">Table 1</xref>. We also surveyed 23 care partners (mean age = 36, <italic>SD</italic> = 12; 6 males, 17 females) of older people with dementia to collect their insights on the extent to which each sound source affects their work. The statistics are shown in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Contents of the sound preference questionnaire for older adults with dementia.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left" colspan="2"><bold>Sound preference questionnaire</bold></th>
<th valign="top" align="left"><bold>Description</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" colspan="2">Demographic information</td>
<td valign="top" align="left">Gender, age</td>
</tr>
<tr>
<td valign="top" align="left">Sound source type</td>
<td valign="top" align="left">Music</td>
<td valign="top" align="left">1 = Extremely dislike</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Birdsong</td>
<td valign="top" align="left">2 = Slightly dislike</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Fountain</td>
<td valign="top" align="left">3 = Don&#x00027;t care</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Stream</td>
<td valign="top" align="left">4 = Slightly like</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Wind and rain</td>
<td valign="top" align="left">5 = Extremely like</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Wind blowing leaves</td>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>(A)</bold> Sound type preferences of older adults with dementia. <bold>(B)</bold> Evaluation of the effect of each sound type on care partners on activity engagement; M, Music; B, Birdsong; F, Fountain; S, Stream; WR, wind and rain; and WL, wind blowing leaves.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0001.tif"/>
</fig>
<p>A one-sample <italic>t</italic>-test with 3 (meaning &#x0201C;don&#x00027;t care&#x0201D; in the questionnaire) as the test value was performed on the preference scores for different types of sound sources. As shown in <xref ref-type="table" rid="T2">Table 2</xref>, older people with dementia liked music (<italic>p</italic> = 0.001, <italic>t</italic> = 7.56), birdsong (<italic>p</italic> = 0.018, <italic>t</italic> = 2.42), and the sound of a stream (<italic>p</italic> = 0.001, <italic>t</italic> = 3.34) but disliked the sound of wind and rain (<italic>p</italic> = 0.001, <italic>t</italic> = &#x02212;5.36) and of wind blowing leaves (<italic>p</italic> = 0.03, <italic>t</italic> = &#x02212;2.21); their evaluation of the fountain sound was neutral (<italic>p</italic> = 0.724, <italic>t</italic> = 0.35). We also performed a one-sample <italic>t</italic>-test on the degree to which each sound source affected the work of care partners. The results show that care partners believed music (<italic>p</italic> = 0.001, <italic>t</italic> = 4.04) and the sound of a stream (<italic>p</italic> = 0.043, <italic>t</italic> = 2.15) would promote their work; the sound of wind and rain (<italic>p</italic> = 0.009, <italic>t</italic> = &#x02212;2.86) would disturb their work; and birdsong (<italic>p</italic> = 0.788, <italic>t</italic> = 0.27), the sound of a fountain (<italic>p</italic> = 0.497, <italic>t</italic> = 0.72), and the sound of wind blowing leaves (<italic>p</italic> = 0.418, <italic>t</italic> = &#x02212;0.83) would have no effect. Based on these findings, we selected music, birdsong, and the sound of a stream as the intervention sound sources for our field experiment, as they were preferred by the older adults and would not disturb the work of care partners.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Comparison of the degree of preference of older adults with dementia on different types of sound sources.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th/>
<th valign="top" align="center"><bold>M</bold></th>
<th valign="top" align="center"><bold>B</bold></th>
<th valign="top" align="center"><bold>F</bold></th>
<th valign="top" align="center"><bold>S</bold></th>
<th valign="top" align="center"><bold>WR</bold></th>
<th valign="top" align="center"><bold>WL</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Score</td>
<td valign="top" align="center">4.00 &#x000B1; 1.13</td>
<td valign="top" align="center">3.34 &#x000B1; 1.20</td>
<td valign="top" align="center">3.04 &#x000B1; 0.99</td>
<td valign="top" align="center">3.38 &#x000B1; 0.98</td>
<td valign="top" align="center">2.32 &#x000B1; 1.09</td>
<td valign="top" align="center">2.73 &#x000B1; 1.06</td>
</tr>
<tr>
<td valign="top" align="left"><italic>t</italic>-value</td>
<td valign="top" align="center">7.56<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">2.42<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.35</td>
<td valign="top" align="center">3.34<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;5.36<xref ref-type="table-fn" rid="TN1"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;2.21<xref ref-type="table-fn" rid="TN2"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left"><italic>p</italic>-value</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.018</td>
<td valign="top" align="center">0.724</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.030</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN1"><label>&#x0002A;&#x0002A;</label><p><italic>p &#x0003C; 0.01</italic>,</p></fn>
<fn id="TN2"><label>&#x0002A;</label><p><italic>p &#x0003C; 0.05; M, music; B, birdsong; F, fountain; S, stream; WR, wind and rain; and WL, wind blowing leaves</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
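The one-sample <italic>t</italic>-tests above can be sketched in a few lines; the article does not specify its analysis software, so the following Python snippet (with hypothetical preference scores) is only an illustration of the procedure, using the Likert midpoint 3 as the test value:

```python
import math
import statistics

def one_sample_t(scores, mu=3.0):
    """One-sample t-test statistic against a fixed test value (here the
    Likert midpoint 3 = "don't care"). Returns (t, degrees of freedom)."""
    n = len(scores)
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation (n - 1 denominator)
    t = (mean - mu) / (sd / math.sqrt(n))
    return t, n - 1

# Hypothetical scores: a sample centered exactly on the midpoint gives t = 0.
t, df = one_sample_t([2, 3, 4])
print(round(t, 3), df)  # 0.0 2
```

The resulting <italic>t</italic> is then compared against the critical value of the <italic>t</italic>-distribution with <italic>n</italic> &#x02212; 1 degrees of freedom, which yields the <italic>p</italic>-values of the kind reported in Table 2.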
<p>To avoid any change in sound during the activity that could affect participants&#x02019; engagement, the sound intervention time was set to match the finger exercise time (4 min 20 s), and the SPL was set to 60 dBA (El Haj et al., <xref ref-type="bibr" rid="B15">2015</xref>). We recognized that, if the experiment time was too long, participants would become distracted, harming the accuracy of the collected data. To determine a suitable analysis time, a pilot study was conducted, setting the FER sampling rate at 15/s and measuring arousal, which ranges between 0 (inactive) and 1 (active). <xref ref-type="fig" rid="F2">Figure 2</xref> shows changes in arousal during the first 120 s, with the value of arousal determined every 20 s for each of the three sound sources. The trends are similar: during 0&#x02013;20 s and 20&#x02013;40 s, arousal was the largest; arousal subsequently decreased significantly and then remained relatively stable until the end of the recording. However, in the trials with the sound of a stream and with birdsong, arousal rose again after 80 s, which may be due to distraction among the participants. This result is consistent with previous research findings (Meng et al., <xref ref-type="bibr" rid="B40">2020a</xref>). Accordingly, we chose the first 80 s as the duration for analysis in our experiment.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Pilot study results for arousal by the different sound interventions. <bold>(A)</bold> Music, <bold>(B)</bold> stream, and <bold>(C)</bold> birdsong.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0002.tif"/>
</fig>
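The per-20-s values in Figure 2 can be obtained from the raw 15 samples/s FaceReader log by simple binning. A minimal sketch, noting that mean aggregation within each bin is our assumption (the article reports one value per 20-s interval without specifying the aggregation rule):

```python
def bin_arousal(samples, rate=15, window_s=20):
    """Collapse a per-frame arousal trace (values in 0-1, `rate` samples
    per second) into one mean value per `window_s`-second interval."""
    per_bin = rate * window_s  # 300 frames per 20-s bin at 15 samples/s
    return [sum(samples[i:i + per_bin]) / per_bin
            for i in range(0, len(samples) - per_bin + 1, per_bin)]

# 120 s of a constant arousal trace collapses into six 20-s bins.
trace = [0.5] * (15 * 120)
print(bin_arousal(trace))  # [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```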
</sec>
<sec>
<title>Emotional Evaluation Scale Design</title>
<p>To enable comparison with the data collected using FaceReader, we divided the subjective evaluation scale into three parts. The first part is the emotion scale, for which we selected the SAM&#x02014;a nonverbal tool for self-assessment of emotions devised by Bradley and Lang (<xref ref-type="bibr" rid="B9">1994</xref>). The SAM can be used by people with different cognitive levels and different cultural backgrounds, including children and adults (Ying et al., <xref ref-type="bibr" rid="B63">2008</xref>; Peixia et al., <xref ref-type="bibr" rid="B45">2010</xref>), and is simple and easy to administer. It includes three dimensions: arousal, pleasure, and dominance. Each dimension is depicted by five images showing different levels, with an additional scale point between each pair of adjacent images, yielding a 9-point scale. The SAM can quickly quantify the emotional state of the subject on the three dimensions without requiring them to verbalize emotions. Backs et al. (<xref ref-type="bibr" rid="B5">2005</xref>) confirmed that the three dimensions of the SAM have high internal consistency. The SAM has been successfully applied in studies of people with dementia, especially those with mild-to-moderate dementia and memory impairment. It can be used to objectively capture the subjective emotional experience of people with dementia (Blessing et al., <xref ref-type="bibr" rid="B8">2010</xref>; Lixiu and Hong, <xref ref-type="bibr" rid="B36">2016</xref>). In the second part of the subjective evaluation scale, we included a question asking participants to rate their acoustic comfort with the sound source (see <xref ref-type="table" rid="T3">Table 3</xref>). The third part of the survey collects demographics, including the age, gender, and MMSE score of the participant, among other information. These data were obtained by asking the care partners or checking the medical records of the participants.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Contents of the emotional evaluation scale.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left" colspan="2"><bold>Subjective evaluation</bold></th>
<th valign="top" align="left"><bold>Range</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left" colspan="2">Acoustic comfort</td>
<td valign="top" align="left"><inline-graphic xlink:href="fpsyg-12-707809-i0001.tif"/>Very uncomfortable (1) to very comfortable (5) <inline-graphic xlink:href="fpsyg-12-707809-i0002.tif"/></td>
</tr>
<tr>
<td valign="top" align="left" rowspan="3">Emotion dimension</td>
<td valign="top" align="left">Pleasure</td>
<td valign="top" align="left"><inline-graphic xlink:href="fpsyg-12-707809-i0003.tif"/>Very unpleasant (1) to very pleasant (9)<inline-graphic xlink:href="fpsyg-12-707809-i0004.tif"/></td>
</tr>
<tr>
<td valign="top" align="left">Arousal</td>
<td valign="top" align="left"><inline-graphic xlink:href="fpsyg-12-707809-i0005.tif"/>Very sleepy (1) to very excited (9)<inline-graphic xlink:href="fpsyg-12-707809-i0006.tif"/></td>
</tr>
<tr>
<td valign="top" align="left">Dominance</td>
<td valign="top" align="left"><inline-graphic xlink:href="fpsyg-12-707809-i0007.tif"/>Very passive (1) to very proactive (9)<inline-graphic xlink:href="fpsyg-12-707809-i0008.tif"/></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Facial Expression Recognition</title>
<p>FaceReader recognizes facial expressions in a three-step process. The first step is detecting the face (Viola and Jones, <xref ref-type="bibr" rid="B57">2001</xref>). The second step is accurate 3D modeling of the face using an algorithm based on the Active Appearance Model (AAM) (Cootes and Taylor, <xref ref-type="bibr" rid="B11">2000</xref>). In the last step, a trained artificial neural network classifies the facial expression from the AAM output, scoring the probability and intensity of six facial expressions (happiness, surprise, fear, sadness, anger, and disgust) on a continuous scale from 0 (absent) to 1 (fully present) (Lewinski et al., <xref ref-type="bibr" rid="B31">2014</xref>). FaceReader also calculates the valence and arousal of facial expressions. Valence indicates whether the emotional state of the participant is positive (from 0 to 1) or negative (from &#x02212;1 to 0), while arousal indicates how active the test subject is (from 0 to 1) (Frijda, <xref ref-type="bibr" rid="B17">1986</xref>).</p>
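FaceReader's internal computation is proprietary, but its valence score is commonly described as the intensity of the positive expression minus that of the strongest negative expression. The following minimal sketch, which is an illustration of that description rather than FaceReader's actual code, shows how such a per-frame valence in [&#x02212;1, 1] could be derived from the six expression intensities:

```python
# Illustrative sketch only (not FaceReader's implementation): derive a
# valence score in [-1, 1] from per-frame expression intensities in [0, 1],
# following the commonly described convention that valence is the intensity
# of "happiness" minus the intensity of the strongest negative expression.
NEGATIVE = ("sadness", "anger", "fear", "disgust")

def valence(intensities: dict) -> float:
    """intensities maps each basic expression name to a score in [0, 1]."""
    positive = intensities.get("happiness", 0.0)
    negative = max(intensities.get(k, 0.0) for k in NEGATIVE)
    return positive - negative

# Hypothetical frame: mild happiness, stronger sadness -> negative valence.
frame = {"happiness": 0.1, "sadness": 0.3, "fear": 0.05,
         "anger": 0.0, "disgust": 0.02}
print(round(valence(frame), 3))  # -0.2
```

Surprise is omitted from the negative set here because it is not clearly valenced; this is an assumption of the sketch.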
<p>FaceReader accepts pictures or videos of a human face as input, and the software supports offline video input. Compared with pictures, videos generate more data, and the output can be linked over time to reveal trends. We therefore selected videos as the input in our experiment. During video recording, the subject must always face the camera, and only a small angle of rotation is allowed. Older people with dementia can fully meet these requirements while performing finger exercises. Because the number of FaceReader online recording devices was limited, we recorded offline videos of the facial expressions of the subjects.</p>
<p>The experiment site was the indoor public activity space of an elderly care facility in Changchun (15.5 &#x000D7; 16.5 &#x000D7; 2.8 m). <xref ref-type="fig" rid="F3">Figure 3</xref> shows the layout of the room, delineating the main experiment site within the dotted frame, where participants performed the finger exercises. The site was equipped with seven chairs, three tables, video equipment, and a sound source. The video equipment was an iPhone, placed 0.5 m from each elderly person and 0.5&#x02013;1.5 m from the sound source. Because a mobile phone meets the video resolution requirements of the FER software and is small, it could conveniently be mounted on a bracket fixed to the table without making the older adults fearful. The care partner was positioned 2 m from the participant to offer guidance. Throughout the experiment, the doors and windows of the room were closed. To ensure that neither the indoor temperature nor the level of illumination affected the mood and performance of the participants (Altomonte et al., <xref ref-type="bibr" rid="B2">2017</xref>; Petersen and Knudsen, <xref ref-type="bibr" rid="B46">2017</xref>), we ran the experiment from 10:00 to 11:00 in the morning and maintained the temperature at 23 to 25&#x000B0;C (Nematchoua et al., <xref ref-type="bibr" rid="B43">2019</xref>).</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Layout of the experimental site.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0003.tif"/>
</fig>
<p>In the pilot study, we found that the intervention effect disappeared 1 week after stopping the intervention (section Data Analysis); the sequencing effect of the stimuli can therefore be ignored. All 35 participants repeated each set of experiments. To avoid the distraction of having too many people exercising at once, the 35 participants were randomly allocated into groups of seven before each experiment. After the first group was seated, the designated sound source was played and the care partner guided participants in performing finger exercises for 4 min and 20 s. In the no-sound condition, the speaker remained turned off. Subsequently, the emotional evaluation scale was issued for completion by the participants, with assistance from care partners where necessary, within a 5-min window. The remaining groups then underwent the same process in turn to complete one experiment. The experiment was repeated for 5 days under the same sound source, and successive sets of experiments were separated by an interval of 1 week (Meilan Garcia et al., <xref ref-type="bibr" rid="B39">2012</xref>). The flow of the experiment is shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Experimental procedure steps.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0004.tif"/>
</fig>
</sec>
<sec>
<title>Data Analysis</title>
<p>Statistical Product and Service Solutions (SPSS 23.0) was used to analyze the survey data. Data with a large degree of dispersion were removed. In the pilot study, we performed an independent-samples <italic>t</italic>-test comparing valence before the first intervention with valence 1 week after it (i.e., before the second intervention). The pre-intervention valence (&#x02212;0.173 &#x000B1; 0.086) and the 1-week-post-intervention valence (&#x02212;0.141 &#x000B1; 0.096) did not differ significantly (<italic>p</italic> = 0.297). This indicates that, although the experiments were performed by the same groups of participants, the intervention effect disappeared within a week, meaning that the groups can be considered independent in each experiment. Therefore, a one-way ANOVA was used to test the differences in valence between interventions with different sound sources. Linear, quadratic, and cubic regression analyses were used to analyze the changes in valence and facial-expression indices over time. A repeated-measures analysis was then used to test the changes in valence across the different days of the experiment. We also used Pearson&#x00027;s correlations to calculate the relationship between the results from FaceReader and the results from the emotional evaluation scale and to identify individual differences. Effect sizes, denoted <italic>r</italic>, were also reported using an effect size calculator (Lipsey and Wilson, <xref ref-type="bibr" rid="B35">2000</xref>). A point-biserial correlation was used to determine the relationship between gender and test results.</p>
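The analyses were run in SPSS, but the pilot-study check can be sketched in Python for readers who want to reproduce the logic. The arrays below are invented placeholder data drawn to match the reported means and standard deviations, not the study's measurements:

```python
# Hedged sketch of the pilot-study check: an independent-samples t-test
# comparing valence before the first intervention with valence one week
# later. The data are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(-0.173, 0.086, size=35)   # pre-intervention valence
post = rng.normal(-0.141, 0.096, size=35)  # valence one week later

t, p = stats.ttest_ind(pre, post)
# A non-significant p (as reported in the study, p = 0.297) would justify
# treating the groups as independent in each subsequent experiment.
print(f"t = {t:.3f}, p = {p:.3f}")
```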
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec>
<title>The Effects of Sound Interventions on the Facial Expressions of Older Adults With Dementia</title>
<p>FaceReader calculates the valence of the facial expression in each frame. Each individual has different initial values for facial expressions in their natural state. Therefore, experiments with no sound intervention were performed to provide baseline data. The average valence after 20, 40, 60, and 80 s with no sound intervention and with the three types of sound interventions was compared. <xref ref-type="fig" rid="F5">Figure 5</xref> shows how average valence changed from 20 to 80 s (error bars represent the 95% confidence interval). Valence was higher for the sound interventions than for the no-sound condition. The valence of birdsong showed the greatest drop (from &#x02212;0.085 to &#x02212;0.147), followed by music (from &#x02212;0.047 to &#x02212;0.083). The valence of music was the highest of all conditions at 20 s (&#x02212;0.047) and 60 s (&#x02212;0.081); the valence of the sound of a stream dropped from 20 s (&#x02212;0.068) to 60 s (&#x02212;0.083) but was the highest at 80 s (&#x02212;0.068). The valence for the no-sound condition increased from 20 s (&#x02212;0.161) to 60 s (&#x02212;0.158), then decreased again at 80 s (&#x02212;0.174). To determine the differences between sound interventions, an ANOVA was carried out. The <italic>p</italic>-values at 20, 40, 60, and 80 s were 0.001, 0.001, 0.003, and 0.001, respectively, indicating that at each of these time points the type of sound source had a significant effect on the facial expressions of older adults with dementia.</p>
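The per-time-point comparison can be sketched as a one-way ANOVA across the four conditions. The samples below are simulated around the reported condition means at 20 s (with an assumed spread), purely to illustrate the procedure:

```python
# Minimal sketch (with simulated data) of the one-way ANOVA testing whether
# sound-source type affects valence at a single time point. The standard
# deviation of 0.05 is an assumption; only the means echo the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
no_sound = rng.normal(-0.161, 0.05, 35)
music    = rng.normal(-0.047, 0.05, 35)
stream   = rng.normal(-0.068, 0.05, 35)
birdsong = rng.normal(-0.085, 0.05, 35)

f, p = stats.f_oneway(no_sound, music, stream, birdsong)
print(f"F = {f:.2f}, p = {p:.4f}")
```

In the study this test was repeated independently at 20, 40, 60, and 80 s.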
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>The valence for the different intervention sound types and the no-sound intervention at 20 s <bold>(A)</bold>, 40 s <bold>(B)</bold>, 60 s <bold>(C)</bold>, and 80 s <bold>(D)</bold>. The error bars show 95% CIs. NS, no sound; M, music; S, stream; and B, birdsong.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0005.tif"/>
</fig>
<p>To test the differences between the sound source types, a multiple-comparison analysis was also carried out. As <xref ref-type="table" rid="T4">Table 4</xref> shows, the biggest difference in valence was between no sound and music, with an average difference of 0.769 at 60 s (<italic>p</italic> = 0.001), followed by the difference between no sound and stream, with an average difference of 0.746 at 60 s (<italic>p</italic> = 0.008). The average difference in valence between birdsong and no sound was significant at 20 s (<italic>p</italic> = 0.049) and 40 s (<italic>p</italic> = 0.038) but non-significant at 60 and 80 s. In addition, valence did not differ significantly between music and stream. However, valence differed significantly between music and birdsong at 60 s (average difference = 0.059, <italic>p</italic> = 0.025) and between stream and birdsong at 80 s (average difference = 0.794, <italic>p</italic> = 0.029). The valence results in <xref ref-type="fig" rid="F5">Figure 5</xref> and <xref ref-type="table" rid="T4">Table 4</xref> show that interventions with sound sources had a positive effect on the valence of the facial expressions of older adults with dementia compared with the no-sound condition. In addition, there were differences in valence between the sound source types at different time points during the intervention, which indicates that FaceReader can identify differences in the emotional responses of older people with dementia to different intervention sound sources.</p>
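The paper reports a multiple-comparison analysis without naming the post-hoc procedure, so the sketch below uses pairwise Welch t-tests with a Bonferroni correction as one plausible stand-in; SPSS offers several alternatives (e.g., Tukey HSD, LSD), and the data here are simulated:

```python
# Hedged sketch of a post-hoc multiple-comparison step on simulated valence
# data: all six pairwise Welch t-tests among the four conditions, with
# Bonferroni-adjusted p-values. The actual post-hoc test used in SPSS is
# not specified in the paper; this is one common choice.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = {
    "NS": rng.normal(-0.161, 0.05, 35),  # no sound
    "M":  rng.normal(-0.047, 0.05, 35),  # music
    "S":  rng.normal(-0.068, 0.05, 35),  # stream
    "B":  rng.normal(-0.085, 0.05, 35),  # birdsong
}
n_pairs = len(groups) * (len(groups) - 1) // 2  # 6 pairwise comparisons

results = {}
for (a, xa), (b, xb) in combinations(groups.items(), 2):
    diff = xa.mean() - xb.mean()
    t, p = stats.ttest_ind(xa, xb, equal_var=False)
    results[(a, b)] = (diff, min(1.0, p * n_pairs))  # Bonferroni adjustment

for (a, b), (diff, p_adj) in results.items():
    print(f"{a} vs {b}: diff = {diff:+.3f}, adjusted p = {p_adj:.3f}")
```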
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>The average difference in valence between different sound source types at 20, 40, 60, and 80 s during the intervention.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Intervention time point</bold></th>
<th valign="top" align="center"><bold>NS&#x00026;M</bold></th>
<th valign="top" align="center"><bold>NS&#x00026;S</bold></th>
<th valign="top" align="center"><bold>NS&#x00026;B</bold></th>
<th valign="top" align="center"><bold>M&#x00026;S</bold></th>
<th valign="top" align="center"><bold>M&#x00026;B</bold></th>
<th valign="top" align="center"><bold>S&#x00026;B</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">20 s</td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.113<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.096<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.075<xref ref-type="table-fn" rid="TN4"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.021</td>
<td valign="top" align="center">0.017</td>
<td valign="top" align="center">0.170</td>
</tr>
<tr>
<td valign="top" align="left">40 s</td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.081<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.080<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.051<xref ref-type="table-fn" rid="TN4"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.038</td>
<td valign="top" align="center">0.029</td>
</tr>
<tr>
<td valign="top" align="left">60 s</td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.769<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.746<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.018</td>
<td valign="top" align="center">0.022</td>
<td valign="top" align="center" style="background-color:#939598">0.059<xref ref-type="table-fn" rid="TN4"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.057</td>
</tr>
<tr>
<td valign="top" align="left">80 s</td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.092<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">&#x02212;0.106<xref ref-type="table-fn" rid="TN3"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">&#x02212;0.289</td>
<td valign="top" align="center">-0.145</td>
<td valign="top" align="center">0.648</td>
<td valign="top" align="center" style="background-color:#939598">0.794<xref ref-type="table-fn" rid="TN4"><sup>&#x0002A;</sup></xref></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN3"><label>&#x0002A;&#x0002A;</label><p><italic>p &#x0003C; 0.01</italic>,</p></fn>
<fn id="TN4"><label>&#x0002A;</label><p><italic>p &#x0003C; 0.05; NS, no sound; M, music; S, stream; and B, birdsong</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>By performing linear, quadratic, or cubic regression analysis on each intervention, <xref ref-type="fig" rid="F6">Figure 6</xref> shows the trend of valence over time for each of the three sound interventions. Valence changed significantly over time for the music (<italic>p</italic> = 0.001) and birdsong (<italic>p</italic> = 0.016) interventions, but the valence for the stream sound intervention did not change significantly. In the music intervention, valence decreased by 0.058 at around 60 s and then recovered slightly. In the birdsong intervention, valence dropped by 0.091 from 0 to 40 s, then rose by 0.138 until 100 s, before subsequently declining again. These results demonstrate that FaceReader can reflect how the facial expressions of older adults with dementia change over time.</p>
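The trend-fitting step can be sketched with ordinary polynomial least squares: fit degree-1, degree-2, and degree-3 models of valence against time and compare goodness of fit. The time series below is invented for illustration:

```python
# Sketch of fitting linear, quadratic, and cubic trends of valence over
# time via least squares (numpy.polyfit) and comparing R^2. The valence
# values are illustrative, not the study's data.
import numpy as np

t = np.array([0, 20, 40, 60, 80, 100, 120], dtype=float)  # seconds
v = np.array([-0.05, -0.06, -0.08, -0.105, -0.10, -0.095, -0.09])

for degree in (1, 2, 3):
    coefs = np.polyfit(t, v, degree)          # highest-order term first
    pred = np.polyval(coefs, t)
    ss_res = np.sum((v - pred) ** 2)
    ss_tot = np.sum((v - v.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                # coefficient of determination
    print(f"degree {degree}: R^2 = {r2:.3f}")
```

Since the models are nested, R&#x000B2; can only increase with degree; in practice the significance of the added terms (as reported in the paper's p-values) decides which trend to keep.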
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>The relationship between valence and time for the sound interventions of music <bold>(A)</bold>, stream <bold>(B)</bold>, and birdsong <bold>(C)</bold>.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0006.tif"/>
</fig>
</sec>
<sec>
<title>The Influence of Sound Interventions on Facial Expression Indices</title>
<p><xref ref-type="fig" rid="F7">Figure 7</xref> shows the differences in facial expression indices of the participants between the three types of sound sources. Sadness (mean = 0.036, <italic>SD</italic> = 0.015), fear (mean = 0.049, <italic>SD</italic> = 0.022), and disgust (mean = 0.042, <italic>SD</italic> = 0.021) all differed significantly between interventions (<italic>p</italic> &#x0003C; 0.01), whereas happiness (<italic>p</italic> = 0.081), surprise (<italic>p</italic> = 0.503), and anger (<italic>p</italic> = 0.071) did not. Therefore, facial expression indices of sadness, fear, and disgust were selected to analyze the impacts of different sound interventions.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>The effect of each sound source type on different facial expression indices: <bold>(A)</bold> happiness, <bold>(B)</bold> sadness, <bold>(C)</bold> anger, <bold>(D)</bold> surprise, <bold>(E)</bold> fear, <bold>(F)</bold> disgust. Error bars show 95% CIs. M, music; S, stream; B, birdsong.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0007.tif"/>
</fig>
<p><xref ref-type="fig" rid="F8">Figure 8</xref> shows the results of linear, quadratic, and cubic regression analyses for the facial expression indices of sadness, fear, and disgust. All three expression indices were significantly affected by time except for disgust with the sound of a stream (<italic>p</italic> = 0.920) and for fear with the birdsong intervention (<italic>p</italic> = 0.682).</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>The relationship between facial expression indices&#x02014;<bold>(A)</bold> sadness, <bold>(B)</bold> fear, and <bold>(C)</bold> disgust&#x02014;and time for different sound interventions. M, music; S, stream; and B, birdsong.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0008.tif"/>
</fig>
<p>Focusing first on sadness, <xref ref-type="fig" rid="F8">Figure 8A</xref> shows that, for the music intervention, sadness expression increased by 0.015 from 0 to 40 s before gradually decreasing. For the stream sound, sadness dropped by 0.003 from 0 to 20 s and then gradually rose by 0.014 until 80 s, before subsequently decreasing again. For birdsong, sadness gradually increased over time (by &#x0007E;0.01 every 20 s).</p>
<p>Turning to fear, <xref ref-type="fig" rid="F8">Figure 8B</xref> shows a gradual rise from 0 to 50 s (0.009 every 20 s) for the music intervention and then a decrease of 0.007 from 50 to 80 s, followed by a linear rise (of 0.002 every 20 s). For the stream sound, fear expression increased by 0.021 in the first 60 s and then decreased by 0.035 from 60 to 120 s. For birdsong, the fear expression did not change significantly over time (<italic>p</italic> = 0.682).</p>
<p>Regarding disgust, <xref ref-type="fig" rid="F8">Figure 8C</xref> shows a rapid rise of 0.028 from 0 to 40 s for the music intervention and then a slow drop of 0.008 from 40 to 100 s. For birdsong, disgust increased by 0.06 from 0 s to about 20 s and then showed a linear downward trend, with an average decrease of 0.03 every 20 s. For the stream sound, disgust expression did not change significantly over time (<italic>p</italic> = 0.920).</p>
<p>The above results show that, under the different sound interventions, the sadness, fear, and disgust indices generally changed significantly over time. Therefore, in studies of the emotions of older adults with dementia, these facial expression indices can be used to evaluate the effects of emotional interventions.</p>
</sec>
<sec>
<title>The Influence of Intervention Duration on Facial Expressions of Older Adults With Dementia</title>
<p>To explore the influence of intervention duration on facial expressions, we conducted a further set of experiments in which the intervention sound (music) began either 2 min before the exercise started (advance group) or at the beginning of the exercise (normal group). The other experimental steps were unchanged.</p>
<p><xref ref-type="fig" rid="F9">Figure 9A</xref> shows that valence significantly differed between the advance group and the normal group at 20 s (<italic>p</italic> = 0.003), 40 s (<italic>p</italic> = 0.021), 60 s (<italic>p</italic> = 0.019), and 80 s (<italic>p</italic> = 0.019). In addition, valence in the advance group (mean = &#x02212;0.156, &#x02212;0.156, &#x02212;0.119, &#x02212;0.120) was significantly lower than that in the normal group (mean = &#x02212;0.047, &#x02212;0.07, &#x02212;0.081, &#x02212;0.082) at the four respective time points.</p>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>Differences in <bold>(A)</bold> valence and <bold>(B)</bold> arousal for music intervention of different durations.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0009.tif"/>
</fig>
<p>In terms of arousal, <xref ref-type="fig" rid="F9">Figure 9B</xref> shows significant differences between the advance group and the normal group at 20 s (<italic>p</italic> = 0.001), 40 s (<italic>p</italic> = 0.003), 60 s (<italic>p</italic> = 0.001), and 80 s (<italic>p</italic> = 0.001). Moreover, arousal in the advance group (mean = 0.315, 0.304, 0.308, 0.298) was significantly lower than in the normal group (mean = 0.444, 0.401, 0.378, 0.361) at the four respective time points.</p>
</sec>
<sec>
<title>The Influence of Intervention Duration on Facial Expression Indices</title>
<p><xref ref-type="fig" rid="F10">Figure 10</xref> shows the relationship between the intervention duration (advance group and normal group) and the six facial expression indices. There are significant differences between the advance and normal groups in happiness (<italic>p</italic> = 0.002), fear (<italic>p</italic> = 0.018), and surprise (<italic>p</italic> = 0.001).</p>
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p>Differences in facial expression indices&#x02014;<bold>(A)</bold> happiness, <bold>(B)</bold> sadness, <bold>(C)</bold> anger, <bold>(D)</bold> surprise, <bold>(E)</bold> fear, <bold>(F)</bold> disgust&#x02014;for different intervention durations.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0010.tif"/>
</fig>
<p><xref ref-type="fig" rid="F10">Figure 10A</xref> shows that happiness expression was significantly lower in the advance group (mean = 0.022, SD = 0.024) than in the normal group (mean = 0.035, <italic>SD</italic> = 0.026), with the largest difference at 80 s. <xref ref-type="fig" rid="F10">Figure 10D</xref> shows that surprise expression was also significantly lower in the advance group (mean = 0.021, <italic>SD</italic> = 0.026) than in the normal group (mean = 0.039, <italic>SD</italic> = 0.032), with the largest difference at 60 s. <xref ref-type="fig" rid="F10">Figure 10E</xref> shows that fear expression in the advance group (mean = 0.038, <italic>SD</italic> = 0.036) was significantly lower than that in the normal group (mean = 0.060, <italic>SD</italic> = 0.071), most substantially at 80 s.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>To determine whether and how FaceReader can take the place of questionnaires as a tool in sound perception research, the results of the two methods should be compared. As shown above, FaceReader can recognize the facial expressions of older people with dementia; here we discuss whether it can replace the subjective evaluation scale as a tool for emotional research in this population. Bivariate Pearson correlations were used to analyze the relationship between the subjective emotional evaluations of the participants and facial expression valence, reporting effect sizes as Cohen&#x00027;s <italic>d</italic> (<xref ref-type="table" rid="T5">Table 5</xref>). Based on the sign of <italic>r</italic>, the valence of facial expressions is positively correlated with the subjective evaluations of pleasure, arousal, dominance, and acoustic comfort. In the music intervention, pleasure (<italic>r</italic> ranging from 0.460 to 0.679) was significantly correlated with valence at all four time points. Dominance (<italic>r</italic> ranging from 0.282 to 0.600) and acoustic comfort (<italic>r</italic> ranging from 0.202 to 0.545) were significantly correlated with valence at 60 s and 80 s, while arousal (<italic>r</italic> = 0.468) was significantly correlated with valence at 20 s. For the sound of a stream, valence change in the first 60 s can be used to predict arousal (<italic>r</italic> ranging from 0.061 to 0.866), pleasure (<italic>r</italic> ranging from 0.021 to 0.762), and acoustic comfort (<italic>r</italic> ranging from 0.102 to 0.760), while dominance was reflected by valence change at 20 s (<italic>r</italic> = 0.790). In the birdsong intervention, valence change in the first 60 s can be used to predict pleasure (<italic>r</italic> = 0.830), arousal (<italic>r</italic> = 0.891), and acoustic comfort (<italic>r</italic> ranging from 0.769 to 0.907), while dominance (<italic>r</italic> = 0.824) was represented by valence at 20 s.</p>
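The correlation step pairs each participant's FaceReader valence at a time point with their SAM rating. A minimal sketch, with invented data constructed so the two measures are related:

```python
# Hedged sketch of correlating FaceReader valence with a SAM pleasure
# rating via Pearson's r. Both arrays are simulated; only the procedure
# mirrors the paper's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
valence = rng.normal(-0.08, 0.04, 25)                  # FaceReader valence
pleasure = 5 + 40 * valence + rng.normal(0, 0.8, 25)   # 9-point SAM rating

r, p = stats.pearsonr(valence, pleasure)
print(f"r = {r:.3f}, p = {p:.3f}")
```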
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p>The relationship between subjective emotional evaluation and facial expression valence in three sound interventions at 20, 40, 60, and 80 s.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Sound source type</bold></th>
<th/>
<th valign="top" align="center" colspan="4" style="border-bottom: thin solid #000000;"><bold>Subjective emotional evaluation</bold></th>
</tr>
<tr>
<th/>
<th valign="top" align="left"><bold>Time</bold></th>
<th valign="top" align="center"><bold>Pleasure</bold></th>
<th valign="top" align="center"><bold>Arousal</bold></th>
<th valign="top" align="center"><bold>Dominance</bold></th>
<th valign="top" align="center"><bold>Acoustic comfort</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Music</td>
<td valign="top" align="left">20 s</td>
<td valign="top" align="center" style="background-color:#939598">0.601<xref ref-type="table-fn" rid="TN5"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">0.468<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.282</td>
<td valign="top" align="center">0.202</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">40 s</td>
<td valign="top" align="center" style="background-color:#939598">0.679<xref ref-type="table-fn" rid="TN5"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.406</td>
<td valign="top" align="center">0.431</td>
<td valign="top" align="center">0.374</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">60 s</td>
<td valign="top" align="center" style="background-color:#939598">0.565<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.354</td>
<td valign="top" align="center" style="background-color:#939598">0.535<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">0.475<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="left">80 s</td>
<td valign="top" align="center" style="background-color:#939598">0.460<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.343</td>
<td valign="top" align="center" style="background-color:#939598">0.600<xref ref-type="table-fn" rid="TN5"><sup>&#x0002A;&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">0.545<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td valign="top" align="left">Stream sound</td>
<td valign="top" align="left">20 s</td>
<td valign="top" align="center">0.021</td>
<td valign="top" align="center">0.061</td>
<td valign="top" align="center" style="background-color:#939598">0.790<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.102</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">40 s</td>
<td valign="top" align="center" style="background-color:#939598">0.770<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.749</td>
<td valign="top" align="center">0.236</td>
<td valign="top" align="center" style="background-color:#939598">0.766<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="left">60 s</td>
<td valign="top" align="center" style="background-color:#939598">0.762<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">0.772<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.192</td>
<td valign="top" align="center" style="background-color:#939598">0.760<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="left">80 s</td>
<td valign="top" align="center">0.692</td>
<td valign="top" align="center" style="background-color:#939598">0.866<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.064</td>
<td valign="top" align="center">0.697</td>
</tr>
<tr>
<td valign="top" align="left">Birdsong</td>
<td valign="top" align="left">20 s</td>
<td valign="top" align="center">0.727</td>
<td valign="top" align="center" style="background-color:#939598">0.891<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">0.824<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.769</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">40 s</td>
<td valign="top" align="center">0.768</td>
<td valign="top" align="center">0.760</td>
<td valign="top" align="center">0.588</td>
<td valign="top" align="center" style="background-color:#939598">0.862<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="left">60 s</td>
<td valign="top" align="center" style="background-color:#939598">0.830<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center" style="background-color:#939598">0.844<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
<td valign="top" align="center">0.651</td>
<td valign="top" align="center" style="background-color:#939598">0.860<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
</tr>
<tr>
<td/>
<td valign="top" align="left">80 s</td>
<td valign="top" align="center">0.796</td>
<td valign="top" align="center">0.514</td>
<td valign="top" align="center">0.400</td>
<td valign="top" align="center" style="background-color:#939598">0.907<xref ref-type="table-fn" rid="TN6"><sup>&#x0002A;</sup></xref></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="TN5"><label>&#x0002A;&#x0002A;</label><p><italic>p &#x0003C; 0.01</italic>,</p></fn>
<fn id="TN6"><label>&#x0002A;</label><p><italic>p &#x0003C; 0.01</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>In terms of individual differences, first, the point-biserial correlation analysis revealed that gender and facial expression were not significantly correlated in the music and birdsong interventions. This is consistent with the conclusions of previous research evaluating acoustic environments using questionnaires (Meng et al., <xref ref-type="bibr" rid="B40">2020a</xref>). However, a significant correlation was found between facial expressions and gender at 20 s in the stream sound intervention, with valence significantly higher among women than men (<italic>r</italic> = 0.869, <italic>p</italic> = 0.011). This suggests that the sound of a stream can more easily elevate the emotions of women. Regarding age, the results of the bivariate Pearson correlation analysis show a negative correlation between age and facial expression valence at 80 s for the music intervention (<italic>r</italic> = &#x02212;0.467, <italic>p</italic> = 0.044) and at 20 s for the stream sound (<italic>r</italic> = &#x02212;0.756, <italic>p</italic> = 0.049). However, there was no correlation between age and valence for the birdsong intervention. Finally, we found no correlation between the MMSE score and facial expression valence for any of the three sound types.</p>
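The gender analysis can be sketched as a point-biserial correlation between a binary variable and valence. The data below are simulated for illustration, with gender coded 0/1 as an assumption of the sketch:

```python
# Hedged sketch of the gender analysis: point-biserial correlation between
# a binary gender code and facial expression valence. All values simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
gender = np.array([0] * 15 + [1] * 15)                    # 0 = men, 1 = women
valence = rng.normal(-0.08, 0.05, size=30) + 0.03 * gender  # slight shift

r, p = stats.pointbiserialr(gender, valence)
print(f"r = {r:.3f}, p = {p:.3f}")
```

The point-biserial coefficient is mathematically equivalent to Pearson's r with one dichotomous variable, which is why the same sign-based interpretation applies.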
<p>In terms of intervention days, <xref ref-type="fig" rid="F11">Figure 11</xref> shows the mean valence on each day for the different sound sources over the 5 days of intervention. A repeated-measures analysis of variance reveals that the valence for the music intervention on the third day (mean = &#x02212;0.098, <italic>SD</italic> = 0.014) and fourth day (mean = &#x02212;0.104, <italic>SD</italic> = 0.015) was significantly higher (<italic>p</italic> = 0.007) than that on the fifth day (mean = &#x02212;0.126, <italic>SD</italic> = 0.016). For the stream sound, valence on the fourth day (mean = &#x02212;0.099, <italic>SD</italic> = 0.046) differed significantly (<italic>p</italic> = 0.041) from that on the third day (mean = &#x02212;0.356, <italic>SD</italic> = 0.017) and the fifth day (mean = &#x02212;0.023, <italic>SD</italic> = 0.029). For birdsong, however, there were no significant differences in valence between the days (<italic>p</italic> = 0.094). These results indicate that the facial expressions of older adults with dementia are affected by the number of intervention days for two of the three sound sources. Therefore, when studying how sound interventions affect the mood of older people with dementia, the number of intervention days should be considered.</p>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption><p>Changes in valence over 5 days of experiment for different sound interventions. <bold>(A)</bold> music, <bold>(B)</bold> stream, <bold>(C)</bold> birdsong.</p></caption>
<graphic xlink:href="fpsyg-12-707809-g0011.tif"/>
</fig>
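<p>The day-by-day comparison above rests on a repeated-measures ANOVA with intervention day as the within-subject factor. A minimal sketch using statsmodels' <monospace>AnovaRM</monospace> on synthetic long-format data (one valence value per participant per day; the numbers are illustrative, not the study's recordings):</p>

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic balanced design: 6 participants x 5 intervention days.
rng = np.random.default_rng(42)
records = [
    {"subject": s, "day": d,
     "valence": -0.10 + 0.01 * d + rng.normal(0.0, 0.02)}
    for s in range(1, 7)
    for d in range(1, 6)
]
data = pd.DataFrame(records)

# Repeated-measures ANOVA: does mean valence differ across days?
result = AnovaRM(data, depvar="valence", subject="subject", within=["day"]).fit()
print(result)
```

<p>A significant <italic>F</italic> test for the <monospace>day</monospace> factor (as found here for music and stream sound, but not birdsong) would be followed by pairwise comparisons between days.</p>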
<p>In a field experiment studying the emotions of older adults with dementia, factors other than sound may affect a participant&#x00027;s facial expressions, such as sights, smells, and the mood of the care partner. This makes it difficult to recognize emotions from facial expressions alone. Acknowledging this limitation, the aim of this research was to verify the effectiveness of FER in emotion recognition for older people with dementia.</p>
</sec>
<sec sec-type="conclusions" id="s5">
<title>Conclusions</title>
<p>This study proposes FaceReader as a potential method for evaluating the impact of sound interventions on emotions in older people with dementia. Through field experiments with 35 participants, the following conclusions were drawn.</p>
<p>First, FaceReader can identify differences in the emotional responses of older people with dementia to different types of sound interventions. Among the three sound sources, music had the most positive effect on the mood of older adults with dementia, and the effects of music, birdsong, and stream sound were all more positive than the no-sound condition. The facial expression indices of sadness, fear, and disgust also differed significantly between sound sources, whereas happiness, surprise, and anger did not.</p>
<p>Second, starting the sound and the activity simultaneously had a more positive influence on the mood of older adults with dementia than playing the sound before the activity started, especially for the music and stream interventions. Regarding intervention days, only the music and stream sound interventions showed significant differences in effect between days; birdsong also showed differences, but they were not significant. Thus, when using FaceReader to measure the impact of sound interventions on the emotions of older adults with dementia, more than one intervention should be performed to obtain accurate and reliable results.</p>
<p>A comparison of the results from FaceReader and the subjective evaluation scale shows that facial expression valence can predict pleasure, arousal, dominance, and acoustic comfort.</p>
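<p>A predictive relation of this kind can be checked with a simple linear regression of each subjective rating on valence. A sketch with <monospace>scipy.stats.linregress</monospace>, using hypothetical paired observations (the study&#x00027;s data are not shown here):</p>

```python
from scipy import stats

# Hypothetical paired observations: facial-expression valence vs. one
# subjective scale (e.g., acoustic comfort on a 1-5 rating).
valence = [-0.20, -0.15, -0.10, -0.05, 0.00, 0.05]
comfort = [2.1, 2.4, 2.8, 3.1, 3.5, 3.9]

# Fit comfort = intercept + slope * valence
fit = stats.linregress(valence, comfort)
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
```

<p>Repeating the fit for pleasure, arousal, and dominance gives one regression per subjective dimension; a significant positive slope indicates that valence predicts the rating.</p>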
<p>In terms of gender, the sound of a stream elevated emotions more readily in women than in men. In terms of age, age was related to the emotions of older adults with dementia only under the music and stream sound interventions. Regardless of the sound source, no correlations were found between facial expression valence and MMSE scores.</p>
</sec>
<sec sec-type="data-availability-statement" id="s6">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.</p>
</sec>
<sec id="s7">
<title>Ethics Statement</title>
<p>Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>YL: conceptualization, validation, writing&#x02014;review and editing, supervision, and funding acquisition. GY: methodology and formal analysis. ZW: investigation, data curation, and writing&#x02014;original draft preparation. All authors have read and agreed to the published version of the manuscript.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x00027;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ailun</surname> <given-names>Z.</given-names></name> <name><surname>Zhemin</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>Alzheimer&#x00027;s disease patients with music therapy</article-title>. <source>China J. Health Psychol</source>. <volume>26</volume>, <fpage>155</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.13342/j.cnki.cjhp.2018.01.041</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Altomonte</surname> <given-names>S.</given-names></name> <name><surname>Rutherford</surname> <given-names>P.</given-names></name> <name><surname>Wlson</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>Indoor environmental quality: lighting and acoustics</article-title>. <source>Encycl. Sust. Technol.</source> <volume>2</volume>, <fpage>221</fpage>&#x02013;<lpage>229</lpage>. <pub-id pub-id-type="doi">10.1016/B978-0-12-409548-9.10196-4</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amor</surname> <given-names>B.</given-names></name> <name><surname>Drira</surname> <given-names>H.</given-names></name> <name><surname>Berretti</surname> <given-names>S.</given-names></name> <name><surname>Daoudi</surname> <given-names>M.</given-names></name> <name><surname>Srivastava</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>4-D facial expression recognition by learning geometric deformations</article-title>. <source>IEEE Trans. Cybern</source>. <volume>44</volume>, <fpage>2443</fpage>&#x02013;<lpage>2457</lpage>. <pub-id pub-id-type="doi">10.1109/TCYB.2014.2308091</pub-id><pub-id pub-id-type="pmid">25415949</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Axelsson</surname> <given-names>O.</given-names></name> <name><surname>Nilsson</surname> <given-names>M. E.</given-names></name> <name><surname>Berglund</surname> <given-names>B.</given-names></name></person-group> (<year>2010</year>). <article-title>A principal components model of soundscape perception</article-title>. <source>J. Acoust. Soc. Am</source>. <volume>128</volume>, <fpage>2836</fpage>&#x02013;<lpage>2846</lpage>. <pub-id pub-id-type="doi">10.1121/1.3493436</pub-id><pub-id pub-id-type="pmid">21110579</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Backs</surname> <given-names>R. W.</given-names></name> <name><surname>da Silva</surname> <given-names>S. P.</given-names></name> <name><surname>Han</surname> <given-names>K.</given-names></name></person-group> (<year>2005</year>). <article-title>A comparison of younger and older adults&#x00027; self-assessment manikin ratings of affective pictures</article-title>. <source>Exp. Aging Res</source>. <volume>31</volume>, <fpage>421</fpage>&#x02013;<lpage>440</lpage>. <pub-id pub-id-type="doi">10.1080/03610730500206808</pub-id><pub-id pub-id-type="pmid">16147461</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bartlett</surname> <given-names>M. S.</given-names></name> <name><surname>Littlewort</surname> <given-names>G.</given-names></name> <name><surname>Frank</surname> <given-names>M.</given-names></name> <name><surname>Lainscsek</surname> <given-names>C.</given-names></name> <name><surname>Fasel</surname> <given-names>I.</given-names></name> <name><surname>Movellan</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>&#x0201C;Recognizing facial expression: machine learning and application to spontaneous behavior,&#x0201D;</article-title> in <source>2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol 2</source>, eds <person-group person-group-type="editor"><name><surname>Schmid</surname> <given-names>C.</given-names></name> <name><surname>Soatto</surname> <given-names>S.</given-names></name> <name><surname>Tomasi</surname> <given-names>C.</given-names></name></person-group> (<publisher-loc>San Diego, CA</publisher-loc>), <fpage>568</fpage>&#x02013;<lpage>573</lpage>.</citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blessing</surname> <given-names>A.</given-names></name> <name><surname>Keil</surname> <given-names>A.</given-names></name> <name><surname>Linden</surname> <given-names>D. E. J.</given-names></name> <name><surname>Heim</surname> <given-names>S.</given-names></name> <name><surname>Ray</surname> <given-names>W. J.</given-names></name></person-group> (<year>2006</year>). <article-title>Acquisition of affective dispositions in dementia patients</article-title>. <source>Neuropsychologia</source> <volume>44</volume>, <fpage>2366</fpage>&#x02013;<lpage>2373</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2006.05.004</pub-id><pub-id pub-id-type="pmid">16777148</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blessing</surname> <given-names>A.</given-names></name> <name><surname>Zoellig</surname> <given-names>J.</given-names></name> <name><surname>Dammann</surname> <given-names>G.</given-names></name> <name><surname>Martin</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Implicit learning of affective responses in dementia patients: a face-emotion-association paradigm</article-title>. <source>Aging Neuropsychol. Cogn</source>. <volume>17</volume>, <fpage>633</fpage>&#x02013;<lpage>647</lpage>. <pub-id pub-id-type="doi">10.1080/13825585.2010.483065</pub-id><pub-id pub-id-type="pmid">20544414</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bradley</surname> <given-names>M. M.</given-names></name> <name><surname>Lang</surname> <given-names>P. J.</given-names></name></person-group> (<year>1994</year>). <article-title>Measuring emotion: the self-assessment manikin and the semantic differential</article-title>. <source>J. Behav. Ther. Exp. Psychiatry</source> <volume>25</volume>, <fpage>49</fpage>&#x02013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1016/0005-7916(94)90063-9</pub-id><pub-id pub-id-type="pmid">7962581</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>A. L.</given-names></name> <name><surname>Kang</surname> <given-names>J. A.</given-names></name> <name><surname>Gjestland</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>Towards standardization in soundscape preference assessment</article-title>. <source>Appl. Acoust</source>. <volume>72</volume>, <fpage>387</fpage>&#x02013;<lpage>392</lpage>. <pub-id pub-id-type="doi">10.1016/j.apacoust.2011.01.001</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cootes</surname> <given-names>T. F.</given-names></name> <name><surname>Taylor</surname> <given-names>C. J.</given-names></name></person-group> (<year>2000</year>). <source>Statistical models of appearance for computer vision (Dissertation)</source>, <publisher-name>University of Manchester</publisher-name>, <publisher-loc>Manchester, United Kingdom</publisher-loc>.</citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cuddy</surname> <given-names>L. L.</given-names></name> <name><surname>Sikka</surname> <given-names>R.</given-names></name> <name><surname>Silveira</surname> <given-names>K.</given-names></name> <name><surname>Bai</surname> <given-names>S.</given-names></name> <name><surname>Vanstone</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Music-evoked autobiographical memories (MEAMs) in Alzheimer disease: evidence for a positivity effect</article-title>. <source>Cogent Psychol</source>. <volume>4</volume>:<fpage>1277578</fpage>. <pub-id pub-id-type="doi">10.1080/23311908.2016.1277578</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cuddy</surname> <given-names>L. L.</given-names></name> <name><surname>Sikka</surname> <given-names>R.</given-names></name> <name><surname>Vanstone</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Preservation of musical memory and engagement in healthy aging and Alzheimer&#x00027;s disease</article-title>. <source>Ann. N. Y. Acad. Sci</source>. <volume>1337</volume>, <fpage>223</fpage>&#x02013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1111/nyas.12617</pub-id><pub-id pub-id-type="pmid">25773638</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Danling</surname> <given-names>P.</given-names></name></person-group> (<year>2001</year>). <source>General Psychology</source>. <publisher-loc>Beijing</publisher-loc>: <publisher-name>Beijing Normal University Press</publisher-name>.</citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>El Haj</surname> <given-names>M.</given-names></name> <name><surname>Antoine</surname> <given-names>P.</given-names></name> <name><surname>Nandrino</surname> <given-names>J. L.</given-names></name> <name><surname>Gely-Nargeot</surname> <given-names>M. C.</given-names></name> <name><surname>Raffard</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Self-defining memories during exposure to music in Alzheimer&#x00027;s disease</article-title>. <source>Int. Psychogeriatr</source>. <volume>27</volume>, <fpage>1719</fpage>&#x02013;<lpage>1730</lpage>. <pub-id pub-id-type="doi">10.1017/S1041610215000812</pub-id><pub-id pub-id-type="pmid">26018841</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fraile</surname> <given-names>E.</given-names></name> <name><surname>Bernon</surname> <given-names>D.</given-names></name> <name><surname>Rouch</surname> <given-names>I.</given-names></name> <name><surname>Pongan</surname> <given-names>E.</given-names></name> <name><surname>Tillmann</surname> <given-names>B.</given-names></name> <name><surname>Leveque</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>The effect of learning an individualized song on autobiographical memory recall in individuals with Alzheimer&#x00027;s disease: a pilot study</article-title>. <source>J. Clin. Exp. Neuropsychol</source>. <volume>41</volume>, <fpage>760</fpage>&#x02013;<lpage>768</lpage>. <pub-id pub-id-type="doi">10.1080/13803395.2019.1617837</pub-id><pub-id pub-id-type="pmid">31142196</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Frijda</surname> <given-names>N. H.</given-names></name></person-group> (<year>1986</year>). <source>The Emotions.</source> <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garre-Olmo</surname> <given-names>J.</given-names></name> <name><surname>Lopez-Pousa</surname> <given-names>S.</given-names></name> <name><surname>Turon-Estrada</surname> <given-names>A.</given-names></name> <name><surname>Juvinya</surname> <given-names>D.</given-names></name> <name><surname>Ballester</surname> <given-names>D.</given-names></name> <name><surname>Vilalta-Franch</surname> <given-names>J.</given-names></name></person-group> (<year>2012</year>). <article-title>Environmental determinants of quality of life in nursing home residents with severe dementia</article-title>. <source>J. Am. Geriatr. Soc</source>. <volume>60</volume>, <fpage>1230</fpage>&#x02013;<lpage>1236</lpage>. <pub-id pub-id-type="doi">10.1111/j.1532-5415.2012.04040.x</pub-id><pub-id pub-id-type="pmid">22702541</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gomez-Romero</surname> <given-names>M.</given-names></name> <name><surname>Jimenez-Palomares</surname> <given-names>M.</given-names></name> <name><surname>Rodriguez-Mansilla</surname> <given-names>J.</given-names></name> <name><surname>Flores-Nieto</surname> <given-names>A.</given-names></name> <name><surname>Garrido-Ardila</surname> <given-names>E. M.</given-names></name> <name><surname>Lopez-Arza</surname> <given-names>M. V. G.</given-names></name></person-group> (<year>2017</year>). <article-title>Benefits of music therapy on behaviour disorders in subjects diagnosed with dementia: a systematic review</article-title>. <source>Neurologia</source> <volume>32</volume>, <fpage>253</fpage>&#x02013;<lpage>263</lpage>. <pub-id pub-id-type="doi">10.1016/j.nrleng.2014.11.003</pub-id><pub-id pub-id-type="pmid">25553932</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hadinejad</surname> <given-names>A.</given-names></name> <name><surname>Moyle</surname> <given-names>B. D.</given-names></name> <name><surname>Scott</surname> <given-names>N.</given-names></name> <name><surname>Kralj</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Emotional responses to tourism advertisements: the application of FaceReader (TM)</article-title>. <source>Tour. Recreat. Res</source>. <volume>44</volume>, <fpage>131</fpage>&#x02013;<lpage>135</lpage>. <pub-id pub-id-type="doi">10.1080/02508281.2018.1505228</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harding</surname> <given-names>A. H.</given-names></name> <name><surname>Frost</surname> <given-names>G. A.</given-names></name> <name><surname>Tan</surname> <given-names>E.</given-names></name> <name><surname>Tsuchiya</surname> <given-names>A.</given-names></name> <name><surname>Mason</surname> <given-names>H. M.</given-names></name></person-group> (<year>2013</year>). <article-title>The cost of hypertension-related ill-health attributable to environmental noise</article-title>. <source>Noise Health</source> <volume>15</volume>, <fpage>437</fpage>&#x02013;<lpage>445</lpage>. <pub-id pub-id-type="doi">10.4103/1463-1741.121253</pub-id><pub-id pub-id-type="pmid">24231422</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harrison</surname> <given-names>G. W.</given-names></name></person-group> (<year>2016</year>). <article-title>Field experiments and methodological intolerance: reply</article-title>. <source>J. Econ. Methodol</source>. <volume>23</volume>, <fpage>157</fpage>&#x02013;<lpage>159</lpage>. <pub-id pub-id-type="doi">10.1080/1350178X.2016.1158948</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>J. Y.</given-names></name> <name><surname>Ong</surname> <given-names>Z. T.</given-names></name> <name><surname>Lam</surname> <given-names>B.</given-names></name> <name><surname>Ooi</surname> <given-names>K.</given-names></name> <name><surname>Gan</surname> <given-names>W. S.</given-names></name> <name><surname>Kang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Effects of adding natural sounds to urban noises on the perceived loudness of noise and soundscape quality</article-title>. <source>Sc. Total Environ</source>. <volume>711</volume>:<fpage>134571</fpage>. <pub-id pub-id-type="doi">10.1016/j.scitotenv.2019.134571</pub-id><pub-id pub-id-type="pmid">32000311</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Howe</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>Designing and delivering dementia services</article-title>. <source>Australas. J. Ageing</source> <volume>1</volume>, <fpage>67</fpage>&#x02013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1111/ajag.12146</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jensen</surname> <given-names>L.</given-names></name> <name><surname>Padilla</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>Effectiveness of environment-based interventions that address behavior, perception, and falls in people with alzheimer&#x00027;s disease and related major neurocognitive disorders: a systematic review</article-title>. <source>Am. J. Occup. Ther</source>. <volume>71</volume>:<fpage>514</fpage>&#x02013;<lpage>522</lpage>. <pub-id pub-id-type="doi">10.5014/ajot.2017.027409</pub-id><pub-id pub-id-type="pmid">28809653</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jia</surname> <given-names>L.</given-names></name> <name><surname>Quan</surname> <given-names>M.</given-names></name> <name><surname>Fu</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>T.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Wei</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Dementia in China: epidemiology, clinical management, and research advances</article-title>. <source>Lancet Neurol</source>. <volume>19</volume>, <fpage>81</fpage>&#x02013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1016/S1474-4422(19)30290-X</pub-id><pub-id pub-id-type="pmid">31494009</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kaneko</surname> <given-names>Y.</given-names></name> <name><surname>Butler</surname> <given-names>J. P.</given-names></name> <name><surname>Saitoh</surname> <given-names>E.</given-names></name> <name><surname>Horie</surname> <given-names>T.</given-names></name> <name><surname>Fujii</surname> <given-names>M.</given-names></name> <name><surname>Sasaki</surname> <given-names>H.</given-names></name></person-group> (<year>2013</year>). <article-title>Efficacy of white noise therapy for dementia patients with schizophrenia</article-title>. <source>Geriatr. Gerontol. Int</source>. <volume>13</volume>, <fpage>808</fpage>&#x02013;<lpage>810</lpage>. <pub-id pub-id-type="doi">10.1111/ggi.12028</pub-id><pub-id pub-id-type="pmid">23819634</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kang</surname> <given-names>J.</given-names></name> <name><surname>Hui</surname> <given-names>M.</given-names></name> <name><surname>Hui</surname> <given-names>X.</given-names></name> <name><surname>Yuan</surname> <given-names>Z.</given-names></name> <name><surname>Zhongzhe</surname> <given-names>L.</given-names></name></person-group> (<year>2020</year>). <article-title>Research progress on acoutic environments of healthy buildings</article-title>. <source>Chin. Sci. Bull</source>. <volume>65</volume>, <fpage>288</fpage>&#x02013;<lpage>299</lpage>. <pub-id pub-id-type="doi">10.1360/TB-2019-0465</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Korpela</surname> <given-names>K. M.</given-names></name> <name><surname>Ylen</surname> <given-names>M.</given-names></name> <name><surname>Tyrvainen</surname> <given-names>L.</given-names></name> <name><surname>Silvennoinen</surname> <given-names>H.</given-names></name></person-group> (<year>2008</year>). <article-title>Determinants of restorative experiences in everyday favorite places</article-title>. <source>Health Place</source> <volume>14</volume>, <fpage>636</fpage>&#x02013;<lpage>652</lpage>. <pub-id pub-id-type="doi">10.1016/j.healthplace.2007.10.008</pub-id><pub-id pub-id-type="pmid">18037332</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Leitch</surname> <given-names>K. A.</given-names></name> <name><surname>Duncan</surname> <given-names>S. E.</given-names></name> <name><surname>O&#x00027;Keefe</surname> <given-names>S.</given-names></name> <name><surname>Rudd</surname> <given-names>R.</given-names></name> <name><surname>Gallagher</surname> <given-names>D. L.</given-names></name></person-group> (<year>2015</year>). <article-title>Characterizing consumer emotional response to sweeteners using an emotion terminology questionnaire and facial expression analysis</article-title>. <source>Food Res. Int</source>. <volume>76</volume>, <fpage>283</fpage>&#x02013;<lpage>292</lpage>. <pub-id pub-id-type="doi">10.1016/j.foodres.2015.04.039</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lewinski</surname> <given-names>P.</given-names></name> <name><surname>den Uyl</surname> <given-names>T. M.</given-names></name> <name><surname>Butler</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <article-title>Automated facial coding: validation of basic emotions and FACS AUs in FaceReader</article-title>. <source>J. Neurosci. Psychol. Econ</source>. <volume>7</volume>, <fpage>227</fpage>&#x02013;<lpage>236</lpage>. <pub-id pub-id-type="doi">10.1037/npe0000028</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>H. C.</given-names></name> <name><surname>Wang</surname> <given-names>H. H.</given-names></name> <name><surname>Lu</surname> <given-names>C. Y.</given-names></name> <name><surname>Chen</surname> <given-names>T. B.</given-names></name> <name><surname>Lin</surname> <given-names>Y. H.</given-names></name> <name><surname>Lee</surname> <given-names>I.</given-names></name></person-group> (<year>2019</year>). <article-title>The effect of music therapy on reducing depression in people with dementia: a systematic review and meta-analysis</article-title>. <source>Geriatr. Nurs</source>. <volume>40</volume>, <fpage>510</fpage>&#x02013;<lpage>516</lpage>. <pub-id pub-id-type="doi">10.1016/j.gerinurse.2019.03.017</pub-id><pub-id pub-id-type="pmid">31056209</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Z. Z.</given-names></name> <name><surname>Kang</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Sensitivity analysis of changes in human physiological indicators observed in soundscapes</article-title>. <source>Landsc. Urban Plan</source>. <volume>190</volume>:<fpage>103593</fpage>. <pub-id pub-id-type="doi">10.1016/j.landurbplan.2019.103593</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lints-Martindale</surname> <given-names>A. C.</given-names></name> <name><surname>Hadjistavropoulos</surname> <given-names>T.</given-names></name> <name><surname>Barber</surname> <given-names>B.</given-names></name> <name><surname>Gibson</surname> <given-names>S. J.</given-names></name></person-group> (<year>2007</year>). <article-title>A psychophysical investigation of the facial action coding system as an index of pain variability among older adults with and without Alzheimer&#x00027;s disease</article-title>. <source>Pain Med</source>. <volume>8</volume>, <fpage>678</fpage>&#x02013;<lpage>689</lpage>. <pub-id pub-id-type="doi">10.1111/j.1526-4637.2007.00358.x</pub-id><pub-id pub-id-type="pmid">18028046</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lipsey</surname> <given-names>M. W.</given-names></name> <name><surname>Wilson</surname> <given-names>D.</given-names></name></person-group> (<year>2000</year>). <source>Practical Meta-Analysis (Applied Social Research Methods)</source>, <edition>1st Edn.</edition> <publisher-loc>Los Angeles, CA</publisher-loc>: <publisher-name>SAGE Publications</publisher-name>.</citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lixiu</surname> <given-names>Z.</given-names></name> <name><surname>Hong</surname> <given-names>W.</given-names></name></person-group> (<year>2016</year>). <article-title>Study on the scale of self-assessment manikin in elderly patients with dementia</article-title>. <source>Chin. J. Nurs</source>. <volume>51</volume>, <fpage>231</fpage>&#x02013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.3761/j.issn.0254-1769.2016.02.018</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>K. W.</given-names></name> <name><surname>Wong</surname> <given-names>H. M.</given-names></name> <name><surname>Mak</surname> <given-names>C. M.</given-names></name></person-group> (<year>2017</year>). <article-title>Dental environmental noise evaluation and health risk model construction to dental professionals</article-title>. <source>Int. J. Environ. Res. Public Health</source> <volume>14</volume>:<fpage>1084</fpage>. <pub-id pub-id-type="doi">10.3390/ijerph14091084</pub-id><pub-id pub-id-type="pmid">28925978</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marquardt</surname> <given-names>G.</given-names></name> <name><surname>Bueter</surname> <given-names>K.</given-names></name> <name><surname>Motzek</surname> <given-names>T.</given-names></name></person-group> (<year>2014</year>). <article-title>Impact of the design of the built environment on people with dementia: an evidence-based review</article-title>. <source>Herd-Health Environ. Res. Des. J</source>. <volume>8</volume>, <fpage>127</fpage>&#x02013;<lpage>157</lpage>. <pub-id pub-id-type="doi">10.1177/193758671400800111</pub-id><pub-id pub-id-type="pmid">25816188</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meilan Garcia</surname> <given-names>J. J.</given-names></name> <name><surname>Iodice</surname> <given-names>R.</given-names></name> <name><surname>Carro</surname> <given-names>J.</given-names></name> <name><surname>Sanchez</surname> <given-names>J. A.</given-names></name> <name><surname>Palmero</surname> <given-names>F.</given-names></name> <name><surname>Mateos</surname> <given-names>A. M.</given-names></name></person-group> (<year>2012</year>). <article-title>Improvement of autobiographic memory recovery by means of sad music in Alzheimer&#x00027;s disease type dementia</article-title>. <source>Aging Clin. Exp. Res</source>. <volume>24</volume>, <fpage>227</fpage>&#x02013;<lpage>232</lpage>. <pub-id pub-id-type="doi">10.3275/7874</pub-id><pub-id pub-id-type="pmid">21778809</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meng</surname> <given-names>Q.</given-names></name> <name><surname>Hu</surname> <given-names>X. J.</given-names></name> <name><surname>Kang</surname> <given-names>J.</given-names></name> <name><surname>Wu</surname> <given-names>Y.</given-names></name></person-group> (<year>2020a</year>). <article-title>On the effectiveness of facial expression recognition for evaluation of urban sound perception</article-title>. <source>Sci. Total Environ</source>. <volume>710</volume>:<fpage>135484</fpage>. <pub-id pub-id-type="doi">10.1016/j.scitotenv.2019.135484</pub-id><pub-id pub-id-type="pmid">31780160</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meng</surname> <given-names>Q.</given-names></name> <name><surname>Jiang</surname> <given-names>J. N.</given-names></name> <name><surname>Liu</surname> <given-names>F. F.</given-names></name> <name><surname>Xu</surname> <given-names>X. D.</given-names></name></person-group> (<year>2020b</year>). <article-title>Effects of the musical sound environment on communicating emotion</article-title>. <source>Int. J. Environ. Res. Public Health</source> <volume>17</volume>:<fpage>2499</fpage>. <pub-id pub-id-type="doi">10.3390/ijerph17072499</pub-id><pub-id pub-id-type="pmid">32268523</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohler</surname> <given-names>R.</given-names></name> <name><surname>Renom</surname> <given-names>A.</given-names></name> <name><surname>Renom</surname> <given-names>H.</given-names></name> <name><surname>Meyer</surname> <given-names>G.</given-names></name></person-group> (<year>2018</year>). <article-title>Personally tailored activities for improving psychosocial outcomes for people with dementia in long-term care</article-title>. <source>Cochrane Database Syst. Rev</source>. <volume>2</volume>:<fpage>CD009812</fpage>. <pub-id pub-id-type="doi">10.1002/14651858.CD009812.pub2</pub-id><pub-id pub-id-type="pmid">29438597</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nematchoua</surname> <given-names>M. K.</given-names></name> <name><surname>Ricciardi</surname> <given-names>P.</given-names></name> <name><surname>Orosa</surname> <given-names>J. A.</given-names></name> <name><surname>Asadi</surname> <given-names>S.</given-names></name> <name><surname>Choudhary</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>Influence of indoor environmental quality on the self-estimated performance of office workers in the tropical wet and hot climate of cameroon</article-title>. <source>J. Build. Eng.</source> <volume>21</volume>, <fpage>141</fpage>&#x02013;<lpage>148</lpage>. <pub-id pub-id-type="doi">10.1016/j.jobe.2018.10.007</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nishiura</surname> <given-names>Y.</given-names></name> <name><surname>Hoshiyama</surname> <given-names>M.</given-names></name> <name><surname>Konagaya</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Use of parametric speaker for older people with dementia in a residential care setting: a preliminary study of two cases</article-title>. <source>Hong Kong J. Occup. Ther</source>. <volume>31</volume>, <fpage>30</fpage>&#x02013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1177/1569186118759611</pub-id><pub-id pub-id-type="pmid">30186084</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peixia</surname> <given-names>G.</given-names></name> <name><surname>Huijun</surname> <given-names>L.</given-names></name> <name><surname>Ni</surname> <given-names>D.</given-names></name> <name><surname>Dejun</surname> <given-names>G.</given-names></name></person-group> (<year>2010</year>). <article-title>An event-relates-potential study of emotional processing in adolescence</article-title>. <source>China Acta Psychol. Sini</source>. <volume>42</volume>, <fpage>342</fpage>&#x02013;<lpage>351</lpage>. <pub-id pub-id-type="doi">10.3724/SP.J.1041.2010.00342</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petersen</surname> <given-names>S.</given-names></name> <name><surname>Knudsen</surname> <given-names>M. D.</given-names></name></person-group> (<year>2017</year>). <article-title>Method for including the economic value of indoor climate as design criterion in optimisation of office building design</article-title>. <source>Build. Environ</source>. <volume>122</volume>, <fpage>15</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.buildenv.2017.05.036</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Re</surname> <given-names>S.</given-names></name></person-group> (<year>2003</year>). <article-title>Facial expression in severe dementia</article-title>. <source>Z. Gerontol. Geriatr</source>. <volume>36</volume>, <fpage>447</fpage>&#x02013;<lpage>453</lpage>. <pub-id pub-id-type="doi">10.1007/s00391-003-0189-7</pub-id><pub-id pub-id-type="pmid">14685734</pub-id></citation></ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Riley-Doucet</surname> <given-names>C. K.</given-names></name> <name><surname>Dunn</surname> <given-names>K. S.</given-names></name></person-group> (<year>2013</year>). <article-title>Using multisensory technology to create a therapeutic environment for people with dementia in an adult day center a pilot study</article-title>. <source>Res. Gerontol. Nurs</source>. <volume>6</volume>, <fpage>225</fpage>&#x02013;<lpage>233</lpage>. <pub-id pub-id-type="doi">10.3928/19404921-20130801-01</pub-id><pub-id pub-id-type="pmid">23971533</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rolls</surname> <given-names>E. T.</given-names></name></person-group> (<year>2019</year>). <article-title>The orbitofrontal cortex and emotion in health and disease, including depression</article-title>. <source>Neuropsychologia</source>. <volume>128</volume>, <fpage>14</fpage>&#x02013;<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2017.09.021</pub-id><pub-id pub-id-type="pmid">28951164</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Satariano</surname> <given-names>W.</given-names></name></person-group> (<year>2006</year>). <source>Epidemiology of Aging: An Ecological Approach</source>. <publisher-loc>Sudbury, MA</publisher-loc>: <publisher-name>Jones and Bartlett Publishers</publisher-name>.</citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Satler</surname> <given-names>C.</given-names></name> <name><surname>Uribe</surname> <given-names>C.</given-names></name> <name><surname>Conde</surname> <given-names>C.</given-names></name> <name><surname>Da-Silva</surname> <given-names>S. L.</given-names></name> <name><surname>Tomaz</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <article-title>Emotion processing for arousal and neutral content in Alzheimer&#x00027;s disease</article-title>. <source>Int. J. Alzheimers Dis</source>. <volume>2009</volume>:<fpage>278615</fpage>. <pub-id pub-id-type="doi">10.4061/2009/278615</pub-id><pub-id pub-id-type="pmid">20721295</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shuping</surname> <given-names>X.</given-names></name> <name><surname>Chunmei</surname> <given-names>Z.</given-names></name> <name><surname>Shengying</surname> <given-names>P.</given-names></name> <name><surname>Feng</surname> <given-names>J.</given-names></name> <name><surname>Yanan</surname> <given-names>L.</given-names></name> <name><surname>Genchong</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Application of music therapy in elderly patients with impaired consciousness of cerebral infarction</article-title>. <source>China Guangdong Med</source>. <volume>40</volume>, <fpage>308</fpage>&#x02013;<lpage>310</lpage>. <pub-id pub-id-type="doi">10.13820/j.cnki.gdyx.20182273</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smith</surname> <given-names>M. C.</given-names></name></person-group> (<year>1995</year>). <article-title>Facial expression in mild dementia of the Alzheimer type</article-title>. <source>Behav. Neurol</source>. <volume>8</volume>, <fpage>149</fpage>&#x02013;<lpage>156</lpage>.</citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Staats</surname> <given-names>H.</given-names></name> <name><surname>Hartig</surname> <given-names>T.</given-names></name></person-group> (<year>2004</year>). <article-title>Alone or with a friend: a social context for psychological restoration and environmental preferences</article-title>. <source>J. Environ. Psychol</source>. <volume>24</volume>, <fpage>199</fpage>&#x02013;<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1016/j.jenvp.2003.12.005</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Syed</surname> <given-names>M. S. S.</given-names></name> <name><surname>Syed</surname> <given-names>Z. S.</given-names></name> <name><surname>Pirogova</surname> <given-names>E.</given-names></name> <name><surname>Lech</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Static vs. dynamic modelling of acoustic speech features for detection of dementia</article-title>. <source>Int. J. Adv. Comput. Sci. Applic</source>. <volume>11</volume>, <fpage>662</fpage>&#x02013;<lpage>667</lpage>. <pub-id pub-id-type="doi">10.14569/IJACSA.2020.0111082</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Terzis</surname> <given-names>V.</given-names></name> <name><surname>Moridis</surname> <given-names>C. N.</given-names></name> <name><surname>Economides</surname> <given-names>A. A.</given-names></name></person-group> (<year>2010</year>) <article-title>Measuring instant emotions during a self-assessment test: the use of FaceReader</article-title>, in <source>Proceedings of the 7th International Conference on Methods Techniques in Behavioral Research</source> (<publisher-loc>Eindhoven; New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>18</fpage>&#x02013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1145/1931344.1931362</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Viola</surname> <given-names>P.</given-names></name> <name><surname>Jones</surname> <given-names>M.</given-names></name></person-group> (<year>2001</year>). <article-title>Rapid object detection using a boosted cascade of simple features</article-title>, in <source>2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1</source>, eds <person-group person-group-type="editor"><name><surname>Jacobs</surname> <given-names>A.</given-names></name> <name><surname>Baldwin</surname> <given-names>T.</given-names></name></person-group> (<publisher-loc>Kauai, HI</publisher-loc>), <fpage>511</fpage>&#x02013;<lpage>518</lpage>.</citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wong</surname> <given-names>J. K. W.</given-names></name> <name><surname>Skitmore</surname> <given-names>M.</given-names></name> <name><surname>Buys</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>K.</given-names></name></person-group> (<year>2014</year>). <article-title>The effects of the indoor environment of residential care homes on dementia suffers in Hong Kong: a critical incident technique approach</article-title>. <source>Build. Environ</source>. <volume>73</volume>, <fpage>32</fpage>&#x02013;<lpage>39</lpage>. <pub-id pub-id-type="doi">10.1016/j.buildenv.2013.12.001</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xie</surname> <given-names>H.</given-names></name> <name><surname>Zhong</surname> <given-names>B. Z.</given-names></name> <name><surname>Liu</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>Sound environment quality in nursing units in Chinese nursing homes: a pilot study</article-title>. <source>Build. Acoust</source>. <volume>27</volume>, <fpage>283</fpage>&#x02013;<lpage>298</lpage>. <pub-id pub-id-type="doi">10.1177/1351010X20914237</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Hongding</surname> <given-names>L.</given-names></name></person-group> (<year>2015</year>). <article-title>Validity study on FaceReader&#x00027;s images recognition from Chinese facial expression database</article-title>. <source>Chin. J. Ergon</source>. <volume>21</volume>, <fpage>38</fpage>&#x02013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.13837/j.issn.1006-8309.2015.01.0008</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yanna</surname> <given-names>K.</given-names></name></person-group> (<year>2014</year>). <source>Research methods of emotion (Dissertation)</source>. <publisher-name>Xinyang Normal University</publisher-name>, <publisher-loc>Xinyang, China</publisher-loc>.</citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yi</surname> <given-names>F. S.</given-names></name> <name><surname>Kang</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Effect of background and foreground music on satisfaction, behavior, and emotional responses in public spaces of shopping malls</article-title>. <source>Appl. Acoust</source>. <volume>145</volume>, <fpage>408</fpage>&#x02013;<lpage>419</lpage>. <pub-id pub-id-type="doi">10.1016/j.apacoust.2018.10.029</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ying</surname> <given-names>W.</given-names></name> <name><surname>Jing</surname> <given-names>X.</given-names></name> <name><surname>Bingwei</surname> <given-names>Z.</given-names></name> <name><surname>Xia</surname> <given-names>F.</given-names></name></person-group> (<year>2008</year>). <article-title>Native assessment of international affective picture system among 116 Chinese aged</article-title>. <source>Chin. Ment. Health J</source>. <volume>22</volume>, <fpage>903</fpage>&#x02013;<lpage>907</lpage>. <pub-id pub-id-type="doi">10.3321/j.issn:1000-6729.2008.12.010</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zarbakhsh</surname> <given-names>P.</given-names></name> <name><surname>Demirel</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Low-rank sparse coding and region of interest pooling for dynamic 3D facial expression recognition</article-title>. <source>Signal Image Video Process</source>. <volume>12</volume>, <fpage>1611</fpage>&#x02013;<lpage>1618</lpage>. <pub-id pub-id-type="doi">10.1007/s11760-018-1318-5</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhaolan</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <source>Emotional Psychology</source>. <publisher-loc>Beijing</publisher-loc>: <publisher-name>Peking University Press</publisher-name>.</citation>
</ref>
<ref id="B66">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhihui</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <source>The influence of sound source in waiting hall of high-speed railway station on emotion (Dissertation)</source>. <publisher-name>Harbin Institute of Technology</publisher-name>, <publisher-loc>Harbin, China</publisher-loc>.</citation>
</ref>
<ref id="B67">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhongzhe</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <source>Research on the voice preference and acoustic environment of the elderly in nursing homes (Dissertation)</source>. <publisher-name>Harbin Institute of Technology</publisher-name>, <publisher-loc>Harbin, China</publisher-loc></citation>
</ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname> <given-names>T. F.</given-names></name> <name><surname>Wu</surname> <given-names>Y.</given-names></name> <name><surname>Meng</surname> <given-names>Q.</given-names></name> <name><surname>Kang</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Influence of the acoustic environment in hospital wards on patient physiological and psychological indices</article-title>. <source>Front. Psychol</source>. <volume>11</volume>:<fpage>1600</fpage>. <pub-id pub-id-type="doi">10.3389/fpsyg.2020.01600</pub-id><pub-id pub-id-type="pmid">32848994</pub-id></citation></ref>
</ref-list> 
</back>
</article>