
ORIGINAL RESEARCH article

Front. Hum. Neurosci., 28 July 2021
Sec. Cognitive Neuroscience
Volume 15 - 2021 | https://doi.org/10.3389/fnhum.2021.701872

The Perceived Match Between Observed and Own Bodies, but Not Its Accuracy, Is Influenced by Movement Dynamics and Clothing Cues

Lize De Coster1* Pablo Sánchez-Herrero2 Jorge López-Moreno2,3 Ana Tajadura-Jiménez1*
  • 1DEI Interactive Systems Group, Department of Computer Science and Engineering, Universidad Carlos III de Madrid, Madrid, Spain
  • 2Seddi Labs, Madrid, Spain
  • 3Multimodal Simulation Lab, Department of Computer Science and Architecture, Computer Systems and Languages, Statistics and Operative Investigation, Universidad Rey Juan Carlos, Madrid, Spain

Own-perceived body matching – the ability to match one’s own body with an observed body – is a difficult task for both general and clinical populations. Thus far, however, own-perceived body matching has been investigated in situations that are incongruent with how we are used to experiencing and perceiving our body in daily life. In the current study, we aimed to examine own-perceived body matching in a context that more closely resembles real life. More specifically, we investigated the effects of body movement dynamics and clothing cues on own-perceived body matching. We asked participants to match their own body with an externally perceived body that was a 3D-generated avatar based on participants’ real bodies, fitted with a computer-generated dress. This perceived body was (1) either static (non-walking avatar) or dynamic (walking avatar), (2) either bigger, smaller, or the same size as participants’ own body size, and (3) fitted with a dress with a size either bigger, smaller, or the same as participants’ own dress size. Our results suggest that movement dynamics cues did not improve the accuracy of own-perceived body matching, but that confidence about dress fit was higher for dynamic avatars, and that the difference between dynamic and static avatars was dependent on participants’ self-esteem. Furthermore, when participants were asked to rate the observed body in reference to how they wanted to represent themselves to others, dynamic avatars were rated lower than static avatars for the biggest-sized bodies only, possibly reflecting the influence of movement cues on amplifying socio-cultural stereotypes. Finally, while smaller body/dress sizes were systematically rated higher than bigger body/dress sizes for several self-report items, the interplay between body and dress size played an important role in participants’ self-report as well. Thus, while our research suggests that movement and garment dynamics, allowing for realistic, concrete situations that are reminiscent of daily life, influence own-body perception, these cues did not lead to an improvement in accuracy. These findings provide important insights for research exploring (own-) body perception and bodily self-awareness, with practical (e.g., development of online avatars) and clinical (e.g., anorexia nervosa and body dysmorphic disorder) implications.

Introduction

We experience and interact with the world through our body. In order to do so efficaciously and efficiently, humans need to be able to accurately and dynamically perceive their own body. Own-body perception has been extensively investigated using body illusions where the perception of one’s body deviates from the physical one (for a review see Kilteni et al., 2015). These include body distortion illusions, in which the size or posture of the body or its body parts are perceived as distorted (e.g., Goodwin et al., 1972; Ramachandran and Hirstein, 1998); out-of-body illusions, in which people perceive their self to be dislocated from their own body and/or people look at their body from a distance (e.g., Ehrsson, 2007; Lenggenhager et al., 2007); and body ownership illusions, in which non-bodily objects are perceived as a part of one’s own body (e.g., Botvinick and Cohen, 1998; Petkova and Ehrsson, 2008; Dummer et al., 2009; Peck et al., 2013; Maselli and Slater, 2014). These illusions demonstrate that the sense of body ownership, defined as the experience of one’s body and its body parts as one’s own, and necessary to move through the world and interact with others (Martin, 1995; Gallagher, 2000; Ehrsson, 2012; Gallagher and Daly, 2018), is a dynamic and malleable process that is determined by multisensory integration mechanisms (Ehrsson, 2012; Kilteni et al., 2015; Ehrsson and Chancel, 2019; Chancel and Ehrsson, 2020).

In addition to perceiving our own body from within through the integration of multisensory and sensorimotor inputs (Kilteni et al., 2015), own-body perception also takes place when we are confronted with the task of matching an externally perceived body with our own. This matching of our own body with an externally perceived body (own-perceived body matching) has been shown to be largely inaccurate, with people systematically over-estimating (Hashimoto and Iriki, 2013; Linkenauger et al., 2017; Sadibolova et al., 2019) or under-estimating (Tovée et al., 2003; Cazzato et al., 2016b; Ralph-Nearman et al., 2019) their body shape and size. These distortions in our body image have been measured both explicitly (Hashimoto and Iriki, 2013; Linkenauger et al., 2017; Pitron and de Vignemont, 2017; Sadibolova et al., 2019) and implicitly (Longo and Haggard, 2010, 2011; Maister et al., 2021). Importantly, they impact general well-being and have been linked to various clinical disorders (Stice and Shaw, 2002; Kaplan et al., 2013; Dakanalis et al., 2016). Furthermore, this inability to match own and perceived body has several practical implications, such as for the design of self-avatars for online gaming (Ducheneaut et al., 2009) and retail (Merle et al., 2012) experiences. The latter, for example, suffers from general dissatisfaction with purchased items and high return rates (Gallup, 1970; Petersen and Kumar, 2009; Saarijärvi et al., 2017), which have been partly attributed to a lack of resemblance between consumers and their online model/avatar (Kim and Forsythe, 2008). Nevertheless, despite its clinical and practical importance, this form of own-body perception, which involves matching an externally perceived body with one’s own, has remained difficult to improve.

In a study comparing healthy controls with individuals diagnosed with anorexia nervosa, researchers achieved this seemingly difficult task by generating personalized realistic avatars using a combination of 3D scanning and computer-generated imagery (CGI) techniques (Cornelissen et al., 2017). While an over-estimation of own body measurements was still observed in the group of individuals diagnosed with anorexia nervosa, the healthy control group showed accurate body size estimation. The authors suggested that their combined 3D-CGI method might be less prone to visual artifacts and may provide a clearer insight into the size and shape that someone considers him/herself to be. Additionally, they argue that contextualizing own-body evaluation in ecologically valid situations (e.g., looking in the mirror) is vital for future research in the field. While they suggest that the only way to truly achieve this is by allowing participants to inhabit a personalized 3D avatar in which they can manipulate body changes in real time, this method has yielded conflicting results (Piryankova et al., 2014; Preston and Ehrsson, 2014; Dakanalis et al., 2017) and poses practical challenges that are difficult to implement in daily life (e.g., the widespread availability of at-home technology to inhabit 3D avatars). Furthermore, during body perception/estimation experiments, own-perceived body matching is often performed in a way that is incongruent with how we are used to experiencing and observing our own body in daily life. First, while we are used to experiencing our own body in movement, movement dynamics have thus far not been included when investigating own-perceived body matching, although action and motor experience have been shown to be important in the development and maintenance of body ownership (e.g., Dummer et al., 2009; Nava et al., 2018). Second, the avatar/model bodies during own-perceived body matching are usually presented either without clothing (e.g., De Coster et al., 2020) or with static clothing that does not provide additional cues (e.g., wrapping of different sizes of clothing around the body, movement of clothing when the body moves) for body size estimation (e.g., Cornelissen et al., 2017; Mölbert et al., 2018; Thaler et al., 2018; Sadibolova et al., 2019). While it has been shown that dynamics play an important role in the perception of clothing (Aliaga et al., 2015) and that observers are able to infer certain body properties (e.g., body stiffness) from clothing dynamics (Romero et al., 2020), as well as the clothing’s mechanical properties (Bi and Xiao, 2016), whether body size can be predicted from these dynamics and whether own-perceived body matching would be improved by these additional cues remain open questions. In sum, while movement and clothing dynamics likely play an important role in own-body perception in daily life, they have thus far not been investigated.

In the current study, we built upon the idea of emulating real-life practical situations when investigating own-body perception in the context of matching own with a perceived body. More specifically, the aim of this research was to systematically examine the influence of movement dynamics and clothing, two factors that are usually present when we perceive our own body in daily life, on own-perceived body matching. While it has been claimed that we do not have access to observing our body in motion (Kadambi and Lu, 2018), we argue that we rarely observe our own bodies and the accessories that come along with them in purely static positions (e.g., twisting and turning in front of a mirror). Furthermore, while the recognition of our body in motion depends on the integration of visual, somatosensory, proprioceptive, and motor information (Myers and Sowden, 2008), as well as auditory information (Tajadura-Jiménez et al., 2015), we believe that the contribution of visual motion cues alone may still be of relevance to this recognition process. In order to achieve this aim, we created several realistic 3D avatars of different sizes based on participants’ bodies using Skinned Multi-Person Linear modeling (SMPL; Loper et al., 2015). This parametric modeling method is thought to be more accurate and easier to use than other methods, partly because it avoids the intense manual effort inherent to commercial approaches (e.g., CGI). In accordance with a previous study using a similar method (De Coster et al., 2020) and previous research using other techniques (Monteath and McCabe, 1997; Tovée et al., 2003; Johnson et al., 2008; Fikkan and Rothblum, 2012; Cazzato et al., 2015; Spahlholz et al., 2016; Robinson and Kersbergen, 2017; Steinsbekk et al., 2017; Ralph-Nearman et al., 2019), we expected participants not to be able to accurately match their own with a perceived body. More specifically, we expected them to show a preference for smaller- compared to bigger-sized avatars, irrespective of their own body size. Importantly, we contextualized the task of matching own and perceived body in a real-life situation by (1) comparing the accuracy of matching participants’ own with a perceived avatar’s body that was either static or dynamic (walking avatar), (2) fitting the observed model/avatar with a computer-simulated dress in different sizes, and (3) specifically asking participants about their wish to use the perceived model/avatar for online shopping (De Coster et al., 2020). Concerning the effect of movement dynamics, we hypothesized that the addition of dynamic movement cues would increase participants’ ability to accurately determine their own body size/shape given the additional information that these movement cues provide and the resemblance to our everyday real-life environment. This comparison of static vs. dynamic avatars was our main effect of interest, since we expected these findings to yield important insights into the role of action cues in own-body perception and bodily self-awareness, with both clinical and practical implications. To further examine these implications, we investigated whether this effect of movement dynamics was modulated by bodily self-esteem and personality differences, given that previous research has shown that both self-esteem (e.g., Maister et al., 2021) and personality variables (e.g., De Coster et al., 2020) influence body size estimation.
Both healthy (Cornelissen et al., 2013) and clinical (Gardner and Brown, 2014) populations with negative attitudes toward their own body weight, as well as healthy populations scoring higher on neuroticism (Hartmann and Siegrist, 2015), have been shown to overestimate their own body size. We consequently expected that the addition of dynamic cues – which we hypothesized would lead to more accurate body size estimation – might have a different effect (e.g., due to differences in the processing of bodily information; Irvine et al., 2019) for participants with certain personality traits (e.g., neuroticism) and participants scoring low on bodily self-esteem measures, compared to other participants. Finally, we added a dress simulation in different sizes to ensure that the perception of the avatar’s body was congruent with how we generally observe our bodies in everyday situations where we mostly perceive ourselves with, rather than without, clothes (note that this dress simulation was also influenced by the body’s movement dynamics). Thus, we expected that a correct dress size would improve the detection of participants’ own body size.

Materials and Methods

Participants

Sample size was determined using a Bayesian approach in JASP (JASP Team, 2020). Participants were recruited from a pool of individuals who had taken part in previous experiments, with the following inclusion criteria: (1) given that the experimental stimuli had to be based on videos of participants’ actual bodies (see below), participants were only eligible if such videos were already available, since the COVID-19 pandemic and the videos’ specific requirements (e.g., correct distance between the participant and the camera, no background items present, specific clothing for the participant to wear) made it impossible for us or for the participants themselves to record new videos; (2) in order to be able to model both a dress size below and above participants’ real dress size, only participants with a self-reported dress size of 38 or 40 (EU sizes) were eligible (only EU dress sizes 36, 38, 40, and 42 were available to be modeled); (3) to exclude gender effects (He et al., 2020), all participants had to be female. Considering these criteria, the size of our initial available subject pool was 20. We planned to test 15 participants and to check the Bayes Factor (BF; prior based on a Cauchy distribution, default scale of 0.707, zero-centered) after data collection for this group was completed. If a stopping criterion had not been reached, we planned to repeat this procedure for the remaining five participants, and to expand the subject pool if necessary. The stopping criteria were: (1) the BF reached the threshold for moderate evidence to either support (BF10 < 1/3) or reject (BF10 > 3) the null hypothesis for the effect of dynamic vs. static avatars (our main effect of interest) for all self-report items; (2) the pre-specified end date (30/06/2020) had been reached. Because the first criterion was reached, data collection was halted at 15 participants.
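
To illustrate the sequential stopping rule described above, the sketch below shows how the Bayes Factor check could be run in R with the BayesFactor package (the study itself used JASP, with the same zero-centered Cauchy prior, scale 0.707). The data frame and column names are hypothetical.

```r
# Sequential stopping check for the Animation (dynamic vs. static) effect,
# run per self-report item after each batch of participants.
# `ratings`: hypothetical long-format data frame with columns
# participant, item, animation ("static"/"dynamic"), rating.
library(BayesFactor)

check_stopping <- function(ratings, threshold = 3) {
  items <- unique(ratings$item)
  decided <- sapply(items, function(it) {
    # Per-participant mean rating for dynamic and static trials of this item
    means <- aggregate(rating ~ participant + animation,
                       data = subset(ratings, item == it), FUN = mean)
    means <- means[order(means$animation, means$participant), ]
    dyn <- means$rating[means$animation == "dynamic"]
    sta <- means$rating[means$animation == "static"]
    # Paired Bayesian t-test with a zero-centered Cauchy prior (scale 0.707)
    bf10 <- extractBF(ttestBF(x = dyn, y = sta,
                              paired = TRUE, rscale = 0.707))$bf
    # Moderate evidence for H1 (BF10 > 3) or for H0 (BF10 < 1/3)?
    bf10 > threshold || bf10 < 1 / threshold
  })
  all(decided)  # TRUE: stop data collection; FALSE: keep testing
}
```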

Fifteen adults (age in years: range = 18–28, M = 21.60, SD = 2.65; 11 participants with dress size 38, four participants with dress size 40), all female and residing in Spain, participated in the study in exchange for a gift card of 10 euros. Body mass index (BMI) in our sample ranged between 19.3 and 24.1 (M = 21.49, SD = 1.55), which lies within the healthy range (18.5–24.9) as defined by the World Health Organization. One participant scored more than two standard deviations below the sample average on all subscales of the bodily self-esteem questionnaire (see below; see Table 1 for questionnaire data). Removing this participant from the analyses did not change the results. The study was conducted in accordance with the 1964 Declaration of Helsinki and was granted ethical approval by the local ethics committee at Universidad Carlos III de Madrid. All participants provided informed written consent beforehand.


Table 1. Mean (M) and standard deviations (SD) for the subscales of the Body Esteem Scale for Adolescents and Adults (BESAA; rated on a scale from 0 to 4) and the Big 5 Inventory-10 (BFI-10; rated on a scale from 1 to 5).

Stimuli and Apparatus

Figure 1 shows an example frame of the experimental stimuli within the experimental procedure. Example videos (one for the dynamic and one for the static condition, each showing all Body/Dress size combinations) can be found in the Supplementary Material.


Figure 1. Schematic overview of the experimental procedure. During dynamic trials, a 4-s video of a walking avatar was shown in a loop four times for 16 s. During static trials, two frames were selected out of these 4-s videos, and were then also looped for 16 s. Participants were shown both the front and back view of the avatars in both types of trials.

After obtaining a 360° full-body capture of participants, the existing software COLMAP (Schönberger and Frahm, 2016; Schönberger et al., 2016) and custom-made scripts were used to create an avatar representing participants’ real bodies. This avatar was represented using SMPL (Loper et al., 2015), which includes several parameters to modify the avatar mesh. For each participant, different avatars were created by increasing or decreasing the second shape parameter, which primarily reflects changes in waist circumference (although the avatar’s full body changed proportionally with respect to participants’ original body size, i.e., Body size 0). This resulted in three different avatars per participant: an avatar with a body size smaller than participants’ original body size (Body size −1; approximately 4 cm waist reduction), an avatar representing participants’ original body size (Body size 0), and an avatar with a body size bigger than participants’ original body size (Body size +1; approximately 4 cm waist increase; for full details on the avatar creation process see De Coster et al., 2020).

Additionally, a digital dress was created after extracting the patterns and creating 3D meshes from a real dress that was bought in different sizes (36, 38, 40, and 42). The patterns and initial resting position of the virtual dress were created with CLO3D1. Before extracting the dress’ 3D mesh, the dress was partially inflated to separate it from the skin of the avatar mesh, to ensure that there were no initial collisions in the simulation. Similar to the body size manipulation, different dress sizes were created: a dress size that was a size smaller than participants’ original dress size (Dress size −1), a dress size that reflected participants’ original dress size (Dress size 0), and a dress size that was a size bigger than participants’ original dress size (Dress size +1). This resulted in nine body/dress size combinations that were randomized per participant.

In order to allow for dynamic stimuli that represented real-life body and dress behavior during movement, a walking animation was simulated for all avatars (Varol et al., 2017). The dress simulation was added using the simulation engine ARCSim, which allows for fine details and the preservation of fine-scale dynamic behavior (Narain et al., 2012, 2013; Pfaff et al., 2014). The application of one of the default materials (Wang et al., 2011) resulted in a sequence of meshes that represented the dress in different states of the avatar animation. Subsequently, videos of front and back views of the walking avatars with the dress simulation were rendered using Maya (Autodesk, 2019), and combined into one 4-s video in MATLAB (front view of the avatar on the left side, back view of the avatar on the right side).

Finally, two different video types (1,280 × 720 pixels) were created that were used as experimental stimuli. For the dynamic stimuli, videos of the walking avatars were looped four times to allow for sufficient time to inspect both the front and back view of the avatars (16 s; this duration was selected based on a pilot where several durations were tested). For the static stimuli, two frames (one frame where the avatar has the left foot in front, and another frame where the avatar has the right foot in front) were selected out of the original 4-s videos using MATLAB. These frames were combined into a 4-s video in which each frame was shown for 2 s, and looped four times such that the total duration of these static stimuli was equal to that of the dynamic stimuli (16 s).
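
The equal-duration logic of the two stimulus types can be made concrete with a small frame-schedule sketch in R. The 25 fps frame rate and the indices of the two selected frames are assumptions for illustration; the paper specifies only the durations.

```r
# Frame-index schedules for the two stimulus types. The 25 fps frame rate and
# the indices of the two selected frames are assumptions; the paper specifies
# only the durations (4-s clip, each static frame shown for 2 s, 4 loops, 16 s).
fps      <- 25
clip_len <- 4 * fps   # 4-s walking clip = 100 frames
n_loops  <- 4         # both stimulus types were looped four times

# Dynamic stimulus: the full walking clip repeated four times (16 s)
dynamic_schedule <- rep(seq_len(clip_len), times = n_loops)

# Static stimulus: two frames (left foot in front, right foot in front),
# each shown for 2 s, with the resulting 4-s pair looped four times (16 s)
left_front  <- 10     # assumed frame indices
right_front <- 60
static_schedule <- rep(c(rep(left_front,  2 * fps),
                         rep(right_front, 2 * fps)), times = n_loops)

stopifnot(length(dynamic_schedule) == length(static_schedule))  # both 16 s
```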

Self-Report Measures

As described above, an experimental trial consisted of participants observing one of the stimuli for 16 s. At the end of each trial, participants were presented with nine self-report items that had to be rated on a continuous scale from −100 to +100. These items were adapted from previous research (Jin, 2010; Latoschik et al., 2017; De Coster et al., 2020), and measured participants’ own body perception in terms of perceived match between the observed avatar’s body and their own, as well as participants’ preferences toward the observed avatar across different dimensions (see Table 2 for a description of the items). The items were always presented in the same order: “Dress,” “Dress confidence,” “Measurements,” “Measurements confidence,” “Body,” “Myself,” “Others,” “Attractiveness,” and “Rebrowse.” Explicit certainty judgments (i.e., items related to confidence) for the “Dress” and “Measurements” items were added given that research has shown that the reliability of perception across different decisions might be related to subjective rather than objective accuracy (Fairhurst et al., 2018).


Table 2. Description of the self-report items, in the order that they were administered at the end of each trial.

Body Esteem and Personality Questionnaires

Body Esteem Scale for Adolescents and Adults

The Body Esteem Scale for Adolescents and Adults (BESAA) is a 23-item questionnaire that measures people’s affective attitudes toward their own bodies (Mendelson et al., 2010). The questionnaire is comprised of three subscales that address general feelings about one’s appearance (Appearance), evaluations attributed to others about one’s body appearance (Attribution), and satisfaction with one’s body weight (Weight). The questionnaire items are rated on a Likert scale from 0 to 4, with higher scores reflecting more positive attitudes. Cronbach’s α in the current study was 0.89 (Appearance), 0.74 (Attribution), and 0.95 (Weight).

Big 5 Inventory-10

The Big 5 Inventory-10 (BFI-10) is a 10-item version of the Big 5 Personality Test (Benet-Martínez and John, 1998) that measures personality traits. Items are rated on a Likert scale from 1 to 5, and they correspond to five subscales: Extraversion (Cronbach’s α 0.65), Agreeableness (Cronbach’s α 0.71), Conscientiousness (Cronbach’s α 0.67), Neuroticism (Cronbach’s α 0.54), and Openness (Cronbach’s α 0.88; Benet-Martínez and John, 1998; Rammstedt and John, 2007).
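
As a minimal illustration of how the reported subscale reliabilities can be computed, the R sketch below uses psych::alpha on hypothetical item columns; the item lists are placeholders, and the actual BESAA and BFI-10 item-to-subscale assignments follow the cited scale publications.

```r
# Cronbach's alpha for a questionnaire subscale (sketch). `besaa` and `bfi10`
# are hypothetical data frames with one column per item; the item lists below
# are placeholders, not the real item-to-subscale assignments.
library(psych)

appearance_items <- c("besaa_01", "besaa_06", "besaa_09")        # placeholders
alpha_appearance <- psych::alpha(besaa[, appearance_items])
alpha_appearance$total$raw_alpha                                 # e.g., 0.89

extraversion_items <- c("bfi_01", "bfi_06")                      # placeholders
# check.keys = TRUE reverse-scores negatively keyed items automatically
psych::alpha(bfi10[, extraversion_items], check.keys = TRUE)$total$raw_alpha
```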

Procedure

Gorilla Experiment Builder (Anwyl-Irvine et al., 2020) was used to create and host the experiment online. Participants were instructed to complete the experiment individually and in one sitting (verified afterward by the experimenter by checking completion dates/times). Participants were then told that they would observe avatars of different sizes (based on their own body) wearing a dress, and that they would have to answer several questions about the avatars they observed. The experiment consisted of 72 randomized trials (four repetitions of each of the nine static and nine dynamic videos, i.e., the combinations of three body sizes and three dress sizes). On each trial, a fixation cross was presented for 250 ms, and after a 100 ms blank screen, the avatar video was shown for 16,000 ms. Immediately after the end of this video, participants responded to the nine self-report items at their own pace (see Figure 1). After completion of the self-report items and an inter-trial interval of 100 ms, the next trial started. At the end of the experiment, participants filled in the body esteem and personality questionnaires and were instructed to contact the experimenter to receive their monetary compensation. The experiment had a maximum total duration of 30 min.
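
A minimal R sketch of how the 72-trial randomized list follows from the 2 × 3 × 3 design with four repetitions is given below; the factor labels and the seed are illustrative.

```r
# The 72-trial list: 2 (Animation) x 3 (Body size) x 3 (Dress size) conditions,
# each repeated four times, presented in a random order per participant.
conditions <- expand.grid(
  animation  = c("static", "dynamic"),
  body       = c("-1", "0", "+1"),
  dress      = c("-1", "0", "+1"),
  repetition = 1:4
)
stopifnot(nrow(conditions) == 72)

set.seed(1)                                  # one seed per participant in practice
trial_list <- conditions[sample(nrow(conditions)), ]
head(trial_list)
```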

Design and Data Analysis

Normality checks were performed with Shapiro-Wilk tests (all ps > 0.237). A 2 × 3 × 3 repeated-measures design was used for each self-report item separately, with three within-subject factors: Animation (Static vs. Dynamic), Body size (Body −1 vs. Body 0 vs. Body +1), and Dress size (Dress −1 vs. Dress 0 vs. Dress +1). Follow-up paired-samples t-tests and correlations between effects of interest and questionnaire data were corrected for multiple comparisons using false discovery rate (fdr) correction. Data were analyzed using a frequentist approach in R (R Core Team, 2020) as well as a Bayesian approach in JASP (JASP Team, 2020). The latter approach was used to test (1) whether there was moderate to strong evidence to reject the null hypotheses under a Bayesian framework in case of a significant effect and (2) whether potential null results could be considered support for the absence of any effects. For the Bayesian analysis, we obtained BF10 – quantifying the likelihood of the data under the alternative hypothesis relative to the null hypothesis (Wagenmakers et al., 2018) – for each main and interaction effect. We employed a threshold of moderate evidence to support (BF10 < 1/3) or reject (BF10 > 3) the null hypothesis.
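
The analysis pipeline described above can be sketched in R as follows for a single self-report item. Data frame and column names are hypothetical, and BayesFactor::anovaBF is used here as an R stand-in for the Bayesian ANOVA run in JASP.

```r
# Per-item analysis (sketch). `d` holds one row per participant x Animation x
# Body x Dress cell with that cell's mean rating for a single self-report item;
# all column names are hypothetical.
library(BayesFactor)

d$participant <- factor(d$participant)
d$animation   <- factor(d$animation)
d$body        <- factor(d$body)
d$dress       <- factor(d$dress)

# Normality check (Shapiro-Wilk)
shapiro.test(d$rating)

# Frequentist 2 x 3 x 3 repeated-measures ANOVA
fit <- aov(rating ~ animation * body * dress +
             Error(participant / (animation * body * dress)), data = d)
summary(fit)

# Example follow-up paired t-test (Body -1 vs. Body +1); in practice all
# follow-up p-values are collected and fdr-corrected together
body_means <- aggregate(rating ~ participant + body, data = d, FUN = mean)
body_means <- body_means[order(body_means$body, body_means$participant), ]
t_body <- t.test(body_means$rating[body_means$body == "-1"],
                 body_means$rating[body_means$body == "+1"], paired = TRUE)
p.adjust(c(t_body$p.value), method = "fdr")

# Bayesian ANOVA (R stand-in for the JASP analysis); model comparisons yield
# BF10 values for the main and interaction effects
bf <- anovaBF(rating ~ animation * body * dress + participant,
              data = d, whichRandom = "participant")
bf
```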

Results

Main Effects

The main effects of Animation, Body size, and Dress size are summarized in Figure 2. For Animation, a significant effect was observed for the “Dress confidence” item only [F(1,13) = 6.33, p = 0.026, ηp2 = 0.33, BF10 = 3.137], indicating more confidence about dress fit for dynamic (M = 56.90, SD = 5.96) compared to static (M = 50.90, SD = 5.98) avatars (see Figure 2A). None of the other self-report items showed an effect of Animation (all ps > 0.253, all BF10 < 0.223).


Figure 2. Main effects of (A) Animation, (B) Body size, and (C) Dress size. Dress = How likely do you think it is that this dress fits you?, Dress confidence = How certain are you?, Body = I feel as if the body of the avatar is my own body, Myself = The avatar reflects how I consider myself to be, Others = I consider the avatar to reflect how I want to present myself to others, Attractiveness = How attractive do you find the woman represented by this avatar?, Rebrowse = How likely do you think it is that you would choose this avatar as your avatar for online shopping? Body/Dress –1 = One body/dress size smaller than participants’ real body/dress size, Body/Dress 0 = Participants’ real body/dress size, Body/Dress +1 = One body/dress size bigger than participants’ real body/dress size.

For Body size, a significant effect was observed for the items “Dress” [F(2,12) = 5.86, p = 0.017, ηp2 = 0.49, BF10 = 2.429e+7], “Others” [F(2,12) = 23.51, p < 0.001, ηp2 = 0.80, BF10 = 3.090e+28], “Attractiveness” [F(2,12) = 17.87, p < 0.001, ηp2 = 0.75, BF10 = 8.481e+8], and “Rebrowse” [F(2,12) = 4.09, p = 0.044, ηp2 = 0.41, BF10 = 1.378e+9]. Figure 2B shows that a negative linear relationship was consistently observed for these items across the three body sizes.

Finally, for Dress size, a significant effect was found for the items “Body” [F(2,12) = 5.84, p = 0.017, ηp2 = 0.49, BF10 = 0.530] and “Myself” [F(2,12) = 6.12, p = 0.015, ηp2 = 0.50, BF10 = 0.511]. Similar to the effects of Body size, bigger dress sizes were rated lower than smaller dress sizes (see Figure 2C).

Significant pairwise comparisons of the effects of Body and Dress size that survived fdr-correction are reported in Table 3. Note that while the significant effects for Animation and especially Body size all reached the threshold of moderate evidence to reject the null hypothesis (set in the Bayesian analysis), this was not the case for the significant effects concerning Dress size.


Table 3. Pairwise comparisons of the main and interaction effects of Animation, Body size, and Dress size.

Interaction Effects

An interaction between Animation and Body size was found for the “Others” item [F(2,12) = 4.26, p = 0.040, ηp2 = 0.42, BF10 = 7.829e+26]. The difference between static and dynamic avatars was only significant for Body +1 (t(14) = 2.91, p = 0.033, d = 0.22; see Figure 3A), with dynamic avatars (M = −68.36, SD = 31.38) rated lower than static avatars (M = −61.20, SD = 32.52).


Figure 3. Interaction effects of (A) Animation and Body size for the Others item, (B) Body size and Dress size for the Dress confidence and Measurements confidence items, and (C) Body size and Dress size for the Dress, Body, and Myself items. Dress = How likely do you think it is that this dress fits you?, Dress confidence = How certain are you?, Measurements confidence = How certain are you? (In response to How likely do you think it is that this avatar’s measurements correspond to your own?), Body = I feel as if the body of the avatar is my own body, Myself = The avatar reflects how I consider myself to be, Others = I consider the avatar to reflect how I want to present myself to others. Body/Dress –1 = One body/dress size smaller than participants’ real body/dress size, Body/Dress 0 = Participants’ real body/dress size, Body/Dress +1 = One body/dress size bigger than participants’ real body/dress size.

Furthermore, a two-way interaction between Body and Dress size was observed for the items “Dress” [F(4,10) = 3.87, p = 0.038, ηp2 = 0.61, BF10 = 35373.918], “Dress confidence” [F(4,10) = 3.68, p = 0.043, ηp2 = 0.60, BF10 = 0.074], “Measurements confidence” [F(4,10) = 4.87, p = 0.019, ηp2 = 0.66, BF10 = 0.005], “Body” [F(4,10) = 4.25, p = 0.029, ηp2 = 0.63, BF10 = 1128.508], and “Myself” [F(4,10) = 4.90, p = 0.019, ηp2 = 0.66, BF10 = 506.754]. Figure 3B and Table 3 suggest that for the “Dress confidence” and “Measurements confidence” items, Dress −1 was rated significantly higher than Dress +1 for Body +1 only, suggesting that participants were more confident about their answers when presented with the biggest body size. Additionally, for the “Dress,” “Body,” and “Myself” items, the difference between Body +1 and the other body sizes was stronger for Dress −1 and Dress 0 compared to Dress +1 (see Figure 3C and Table 3; note that for the “Body” item, however, none of the comparisons survived correction).

Note that all significant interactions reached the threshold of moderate evidence to reject the null hypothesis, except for the items related to confidence of dress and measurements fit when looking at the interaction between Body and Dress size. No interactions between Animation and Dress size or three-way interactions were observed.

Correlation Analyses With Body Esteem and Personality Questionnaires

In order to reduce the number of tests, we restricted our correlation analyses with the body esteem and personality questionnaires to the main effect of Animation (Dynamic–Static) for all items, given that this was our main effect of interest. For the “Dress confidence” item, a significant negative relationship was observed for the Appearance (r = −0.58, p = 0.045) and Attribution (r = −0.66, p = 0.033) subscales of the BESAA, suggesting that the rating difference between dynamic and static avatars for confidence about dress fit was larger for participants with more negative feelings about their own appearance (see Figure 4A) and with more negative evaluations attributed to others concerning their body appearance (and vice versa; see Figure 4B). A negative correlation was also found between the Attribution subscale of the BESAA and the “Measurements confidence” item (r = −0.64, p = 0.042), indicating that the same negative relationship existed when participants were asked to rate confidence about measurements correspondence (see Figure 4B). There were no other significant correlations for the effect of Animation (all ps > 0.06).
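
A sketch of this correlation analysis in R is given below: the per-participant Animation effect (mean dynamic minus mean static rating for a given item) is correlated with the BESAA subscale scores, and the resulting p-values are fdr-corrected. Object names are hypothetical, and rows are assumed to be aligned by participant.

```r
# Correlation of the Animation effect with the BESAA subscales (sketch).
# `d` is the per-item data frame from the analysis sketch above and
# `besaa_scores` a hypothetical data frame with one row per participant,
# assumed to be in the same participant order.
animation_means <- with(d, tapply(rating, list(participant, animation), mean))
effect_dyn_minus_stat <- animation_means[, "dynamic"] - animation_means[, "static"]

r_appearance  <- cor.test(effect_dyn_minus_stat, besaa_scores$Appearance)
r_attribution <- cor.test(effect_dyn_minus_stat, besaa_scores$Attribution)

# Correct the set of correlation p-values for multiple comparisons (fdr)
p.adjust(c(r_appearance$p.value, r_attribution$p.value), method = "fdr")
```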


Figure 4. Correlations between the main effect of Animation and the (A) Appearance (general feelings about one’s appearance) and (B) Attribution (evaluations attributed to others about one’s body appearance) subscales of the Body Esteem Scale for Adolescents and Adults (BESAA). Higher scores on the BESAA reflect more positive attitudes. Dress = How likely do you think it is that this dress fits you?, Dress confidence = How certain are you?, Measurements confidence = How certain are you? (In response to How likely do you think it is that this avatar’s measurements correspond to your own?).

Discussion

In the current study, we investigated own-body perception in a real-life practical setting by asking participants to match their own body with an externally perceived body that was a 3D-generated avatar based on participants’ real bodies, fitted with a computer-generated dress. This perceived body was (1) either static or dynamic, (2) either bigger, smaller, or the same size as participants’ own body size, and (3) fitted with a dress with a size either bigger, smaller, or the same as participants’ own dress size. Although we expected the addition of action cues (i.e., a walking avatar) to improve the ability to match own and an avatar’s body size (i.e., own body perception ratings), we only observed an effect of moving vs. non-moving avatars when participants had to indicate their confidence in their answer about whether the dress they had just seen would fit them (irrespective of the accuracy of their answer to the item on dress fit). Importantly, however, this observed difference between static and dynamic avatars was dependent on participants’ bodily self-esteem: participants with more negative feelings toward their own body felt more confident when confronted with dynamic avatars than participants with less negative feelings. Furthermore, when asked to rate how well the avatar reflected how participants wanted to represent themselves to others, we observed that dynamic avatars were rated lower than static avatars for the biggest-sized bodies only. For several self-report items, participants systematically rated smaller body/dress sizes higher than bigger body/dress sizes. When asked about confidence about dress and measurements fit, however, the higher ratings for smaller dress sizes were only present for the biggest body size. Finally, when participants had to rate dress fit, how strongly they felt that the avatar’s body was their own, and how the avatar represented how they considered themselves to be, the difference between the biggest body size and the other body sizes was strongest for the smallest dress sizes. We discuss these observed effects and potential limitations in more detail in the following sections.

Effects of Animation

The role of the motor system in shaping and maintaining the bodily self and body ownership in particular has been well-documented by neuroimaging studies showing the emergence of premotor cortex activity lying at the root of our body schema (Graziano et al., 1994; Fogassi et al., 1996; Ehrsson et al., 2004, 2005; Convento et al., 2018), as well as body distortion illusions in healthy (Dummer et al., 2009; Vallar and Ronchi, 2009; Garbarini et al., 2013; Bolognini et al., 2014; Hara et al., 2015; della Gatta et al., 2016) and patient (Burin et al., 2015; Nava et al., 2017) populations. Thus, it seems that the sensory and motor system dynamically interact to develop our bodily self-awareness and self-consciousness (Nava et al., 2018). Interestingly, however, the influence of dynamic action cues on own-body perception when confronted with the task of matching own with an externally perceived body has thus far received little attention.

Our study, which compared dynamic and static avatars by adding walking animations (Varol et al., 2017), indicated that dynamic avatars were only rated higher than static avatars when participants had to rate the confidence in their answer concerning dress fit, suggesting that dynamic avatars increased participants’ certainty about dress fit irrespective of the accuracy of their answer to this item. Furthermore, this difference in ratings between walking and non-walking avatars was bigger for participants with low bodily self-esteem (in terms of confidence about both dress and measurement fit). The question that arises is what prompted participants with negative feelings toward their own body to feel more confident when confronted with dynamic avatars. It has been shown that people who tend to overestimate their own body measurements show disturbed fixation patterns when observing different bodies (Irvine et al., 2019), largely focusing on uninformative areas (Cornelissen et al., 2016). Our results indicate that people with low bodily self-esteem (commonly associated with over-estimation of own body size, see e.g., Ahadzadeh et al., 2018) might also focus their attention differently when dynamic action cues are added to observed avatars, possibly relying more on, or attaching more value to, these additional cues. Future research is warranted, however, to explore fixation patterns in own-body perception of dynamic bodies, and the influence of individual personality differences. Finally, when participants were asked to rate whether the avatar they were presented with reflected how they wanted to present themselves to others, dynamic avatars were rated lower than static avatars for the bigger-sized bodies. Thus, it seems that action cues led to a lower preference for bigger-sized moving avatars when participants had to consider their bodies in a social context, possibly suggesting that movement dynamics cues are especially informative for bigger-sized bodies and consequently exacerbate socio-cultural weight stigma (Fikkan and Rothblum, 2012; Spahlholz et al., 2016). While it has been shown that body image is partly a social construct (Davison, 2012), further research is needed to investigate the role of action cues in own-body perception, particularly when considering its social implications.

There are several reasons why our Animation manipulation might not have improved own-perceived body matching to the degree that we expected it to. First, it is possible that our static condition introduced implied motion. Previous research has shown that the observation of bodily actions employs visual (Grossman and Blake, 2002; Kable and Chatterjee, 2006) and motor areas (Rizzolatti and Craighero, 2004), even when motion is merely implied by static human postures (Urgesi et al., 2006, 2007; Candidi et al., 2008). Furthermore, it has been observed that body size and implied motion interact in influencing aesthetic appreciation of human bodies (Cazzato et al., 2012, Cazzato et al., 2016a), such that implied motion increases the aesthetic preference for thinner bodies (Cazzato et al., 2012). Thus, while the static condition in the current experiment did not offer the same action cues as the dynamic condition, the use of static human postures (representing dynamic movements) likely introduced implied motion of the observed bodies and dresses. Future research should address this important confound, and explore the contribution of dynamic cues when they are contrasted to a purely static condition. Second, an implicit measure of own-body recognition might have been more appropriate than our explicit self-report measure to access bodily representations that use motor/dynamic information. It has been shown that explicit and implicit recognition of our own body depend on different cortical mechanisms (Candini et al., 2016), and that only the former is based on motor information (Ferri et al., 2011). Thus, the explicit task in the current experiment might have only minimally relied on the dynamic cues provided by the Animation manipulation. Follow-up research using more implicit measures of own-body recognition is necessary to shed more light on this issue. Finally, we opted to manipulate movement dynamics by adding walking movements, rather than movements that people typically perform in front of a mirror (e.g., twisting and turning), because we believed they would be more informative and because they offer a viewpoint that we normally don’t (but probably would like to) have access to. However, the choice for these walking movements made the movement dynamics cues less compatible with real-life experiences, which may have affected our findings.

Effects of Body Size

In line with previous research (Longo and Haggard, 2012; Hashimoto and Iriki, 2013; Kaplan et al., 2013; Linkenauger et al., 2017; Sadibolova et al., 2019; Maister et al., 2021), we observed that participants were unable to accurately identify their own body measurements. Furthermore, we replicated results from a previous study (De Coster et al., 2020), showing that participants – irrespective of their own body size – rate smaller-sized bodies higher (i.e., as more attractive, more representative of how they want to present themselves to others, and more likely to be chosen for online shopping) than bigger-sized bodies, even when this own-perceived body matching takes place in a concrete context with practical implications. These findings, obtained using technology that was able to generate highly realistic avatar bodies (Loper et al., 2015), are in line with the body weight stigma that is especially pervasive in women (Fikkan and Rothblum, 2012; Spahlholz et al., 2016), and with research indicating that people tend to underestimate their body size (e.g., Monteath and McCabe, 1997; Tovée et al., 2003; Cazzato et al., 2015; Robinson and Kersbergen, 2017; Steinsbekk et al., 2017; Ralph-Nearman et al., 2019).

Effects of Dress Size

Importantly, we fitted the different avatar bodies with different sizes of a highly realistic computer-generated dress (Narain et al., 2013; Pfaff et al., 2014) to further increase the experiment’s ecological validity and realism. Similar to the effect of body size, our results indicated that participants rated the smallest dress sizes higher than the bigger ones. This difference was only present for the biggest-sized bodies when participants had to rate confidence in their answers concerning dress and measurement fit, however, seemingly suggesting that the biggest body size made it easier for participants to discern the difference between the smallest and the biggest dress sizes. The same was true for the difference between the biggest and smallest body sizes, which was strongest for the smallest dress size for the “Dress” (How likely do you think it is that this dress fits you?), “Body” (I feel as if the body of the avatar is my own), and “Myself” (The avatar reflects how I consider myself to be) items. Together, these results indicate that own-body perception relies on a combination of an avatar’s body and clothing information when participants are presented with realistic avatars and garments. This suggests that garment fit and movement might provide relevant cues for body size estimation. Importantly, however, the addition of these realistic, ecologically valid cues did not improve own-body perception in terms of the ability to match an externally perceived body with one’s own (contrary to Cornelissen et al., 2017), since participants remained unable to identify their own body and dress size accurately. It has to be noted, though, that neither the main effect of dress size nor its interaction with body size for the confidence items met the moderate-evidence threshold for rejecting the null hypothesis set in our Bayesian analysis, which suggests that these effects should be interpreted with caution and warrant further exploration.

Limitations and Implications

The study has several important limitations. First, although BMI measures in the current sample were inside the “normal” or “healthy” range, we did not include any measures of pathological and/or negative body image, nor were participants excluded based on current or previous history of eating or body dysmorphic disorders. The influence of these disorders should be addressed in further research, since it has been shown that they greatly impact body size estimation (Tovée et al., 2003; Cornelissen et al., 2017). Second, the sample size in our study (15 participants) was relatively small. Due to several restrictions imposed by the COVID-19 pandemic at the time of the study, the available subject pool was limited (e.g., 360° videos of participants’ bodies had to be at our disposal). However, the Bayesian stopping criterion used to determine the sample size (see “Participants”) indicated that this sample provided sufficient evidence to answer our main research questions. Finally, it is important to note that we were unable to assess order effects related to the self-report items in the current study, given that the items were always presented in the same order (note that this does not apply to the order of the experimental conditions, which was randomized). Although this was done deliberately to make the task easier for participants, follow-up research should explore the possibility of order effects for the self-report items. Furthermore, the “Attractiveness” item (“How attractive do you find the woman represented by this avatar?”) could have been confusing to participants, given that they were informed that they would observe avatars based on their own body (but of different sizes). While this might have induced participants to self-evaluate their own perceived attractiveness, the observation that smaller-sized bodies were rated as more attractive than bigger-sized bodies seems to suggest that our manipulation was (at least partly) successful.

The influence of eating and/or body dysmorphic disorders on body size estimation is a topic of extensive research. Research suggests, for example, that body size overestimation is a defining feature of anorexia nervosa (Hennighausen et al., 1999; Gardner and Brown, 2014; Dakanalis et al., 2016; Gadsby, 2017; Malighetti et al., 2020; but see Cornelissen et al., 2013, who showed that body size overestimation in women with anorexia nervosa is not qualitatively different from the overestimation observed in women without anorexia nervosa), and that this overestimation is robust to manipulations that improve the accuracy of body size perception in healthy controls. We expect that the addition of action cues might lead to stronger effects in clinical populations, a possibility suggested both by our observation that participants with low bodily self-esteem showed a larger advantage of dynamic avatars and by previous studies indicating that people with anorexia nervosa have a heightened sensitivity to visual bodily cues (Eshkevari et al., 2012; Keizer et al., 2014; Crucianelli et al., 2019; see Martinaud et al., 2017 for similar results in neurological patients). Nevertheless, it is unclear which direction this influence would take (increased vs. decreased accuracy), especially given that our Animation manipulation did not alter the accuracy of own-perceived body matching. However, as described above, future research should include screening for clinical disorders as well as more implicit measures in order to better address the clinical implications of our findings. Furthermore, the use of implicit tasks might also provide more information concerning the practical implications of the current research. Avatar design and development for online retail experiences, for example, depend on maximizing the congruency between the observed avatar and the self for better outcomes (e.g., greater purchase intentions, lower return rates; Kim and Forsythe, 2008). While dynamic cues did not increase the accuracy of matching one’s own with a perceived avatar’s body, research suggests that only implicit measures might be susceptible to such cues (Ferri et al., 2011).

Conclusion

In sum, the current study aimed at contextualizing own-body perception in a real-life, practical situation by uniquely combining different technologies to create realistic, walking, dress-fitted avatars. None of these factors, however, seemed to improve own-perceived body matching, indicating that participants’ own body representations largely remain inaccurate (Hashimoto and Iriki, 2013; Linkenauger et al., 2017; Pitron and de Vignemont, 2017; Sadibolova et al., 2019; Maister et al., 2021) even in a realistic, concrete situation that has practical implications. These findings provide important insights for research exploring the development of online avatars (Kim and Forsythe, 2008) and research investigating own-body perception in clinical disorders such as anorexia nervosa and body dysmorphic disorders (e.g., Tovée et al., 2003; Cornelissen et al., 2017).

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.

Ethics Statement

The studies involving human participants were reviewed and approved by the local ethics committee at the Universidad Carlos III de Madrid. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author Contributions

All authors designed the research and reviewed the manuscript. LDC and PS-H developed the experimental stimuli. LDC implemented the experimental procedure, carried out the experiments, performed the analyses, and wrote the first draft of the manuscript.

Funding

LDC was supported by the CONEX-Plus programme funded by the Universidad Carlos III de Madrid and the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie Grant Agreement No. 801538 and the Ministerio de Ciencia, Innovación y Universidades Juan de la Cierva-Incorporación Grant IJC2018-038347-I. AT-J was supported by the Ministerio de Economía, Industria y Competitividad of Spain Ramón y Cajal Grant RYC-2014-15421. This research was partly funded by the Spanish Agencia Estatal de Investigación (PID2019-105579RB-I00/AEI/10.13039/501100011033).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

The authors would like to thank Luis Javier Romero Ces for his help in processing the videos and Patricia Rodríguez for her help in creating the patterns and 3D meshes of the garments.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnhum.2021.701872/full#supplementary-material

Footnotes

  1. https://www.clo3d.com/

References

Ahadzadeh, A. S., Rafik-Galea, S., Alavi, M., and Amini, M. (2018). Relationship between body mass index, body image, and fear of negative evaluation: moderating role of self-esteem. Health Psychol. Open 5:2055102918774251. doi: 10.1177/2055102918774251

Aliaga, C., O’Sullivan, C., Gutierrez, D., and Tamstorf, R. (2015). “Sackcloth or silk? The impact of appearance vs dynamics on the perception of animated cloth,” in Proceedings of the ACM SIGGRAPH Symposium on Applied Perception, (New York NY), 41–46. doi: 10.1145/2804408.2804412

Anwyl-Irvine, A. L., Massonnié, J., Flitton, A., Kirkham, N., and Evershed, J. K. (2020). Gorilla in our midst: an online behavioral experiment builder. Behav. Res. Methods 52, 388–407. doi: 10.3758/s13428-019-01237-x

Autodesk, I. (2019). Maya. San Rafael CA: Autodesk.

Benet-Martínez, V., and John, O. P. (1998). Los Cinco Grandes across cultures and ethnic groups: multitrait-multimethod analyses of the Big Five in Spanish and English. J. Pers. Soc. Psychol. 75, 729–750. doi: 10.1037/0022-3514.75.3.729

Bi, W., and Xiao, B. (2016). “Perceptual constancy of mechanical properties of cloth under variation of external forces,” in Proceedings of the ACM Symposium on Applied Perception, (New York NY), 19–23. doi: 10.1145/2931002.2931016

Bolognini, N., Ronchi, R., Casati, C., Fortis, P., and Vallar, G. (2014). Multisensory remission of somatoparaphrenic delusion. Neurol. Clin. Pract. 4, 216–225. doi: 10.1212/CPJ.0000000000000033

Botvinick, M., and Cohen, J. (1998). Rubber hands ‘feel’ touch that eyes see. Nature 391:756. doi: 10.1038/35784

Burin, D., Livelli, A., Garbarini, F., Fossataro, C., Folegatti, A., Gindri, P., et al. (2015). Are movements necessary for the sense of body ownership? Evidence from the rubber hand illusion in pure hemiplegic patients. PLoS One 10:e0117155. doi: 10.1371/journal.pone.0117155

Candidi, M., Urgesi, C., Ionta, S., and Aglioti, S. M. (2008). Virtual lesion of ventral premotor cortex impairs visual perception of biomechanically possible but not impossible actions. Soc. Neurosci. 3, 388–400. doi: 10.1080/17470910701676269

Candini, M., Farinelli, M., Ferri, F., Avanzi, S., Cevolani, D., Gallese, V., et al. (2016). Implicit and explicit routes to recognize the own body: evidence from brain damaged patients. Front. Hum. Neurosci. 10:405. doi: 10.3389/fnhum.2016.00405

Cazzato, V., Mele, S., and Urgesi, C. (2016a). Different contributions of visual and motor brain areas during liking judgments of same- and different-gender bodies. Brain Res. 1646, 98–108. doi: 10.1016/j.brainres.2016.05.047

Cazzato, V., Mian, E., Serino, A., Mele, S., and Urgesi, C. (2015). Distinct contributions of extrastriate body area and temporoparietal junction in perceiving one’s own and others’ body. Cogn. Affect. Behav. Neurosci. 15, 211–228. doi: 10.3758/s13415-014-0312-9

Cazzato, V., Siega, S., and Urgesi, C. (2012). “What women like”: influence of motion and form on esthetic body perception. Front. Psychol. 3:235. doi: 10.3389/fpsyg.2012.00235

Cazzato, V., Mian, E., Mele, S., Tognana, G., Todisco, P., et al. (2016b). The effects of body exposure on self-body image and esthetic appreciation in anorexia nervosa. Exp. Brain Res. 234, 695–709. doi: 10.1007/s00221-015-4498-z

Chancel, M., and Ehrsson, H. H. (2020). Which hand is mine? Discriminating body ownership perception in a two-alternative forced-choice task. Atten. Percept. Psychophys. 82, 4058–4083. doi: 10.3758/s13414-020-02107-x

Convento, S., Romano, D., Maravita, A., and Bolognini, N. (2018). Roles of the right temporo-parietal and premotor cortices in self-location and body ownership. Eur. J. Neurosci. 47, 1289–1302. doi: 10.1111/ejn.13937

Cornelissen, K. K., Cornelissen, P. L., Hancock, P. J. B., and Tovée, M. J. (2016). Fixation patterns, not clinical diagnosis, predict body size over-estimation in eating disordered women and healthy controls. Int. J. Eat. Dis. 49, 507–518. doi: 10.1002/eat.22505

Cornelissen, K. K., McCarty, K., Cornelissen, P. L., and Tovée, M. J. (2017). Body size estimation in women with anorexia nervosa and healthy controls using 3D avatars. Sci. Rep. 7:15773. doi: 10.1038/s41598-017-15339-z

Cornelissen, P. L., Johns, A., and Tovée, M. J. (2013). Body size over-estimation in women with anorexia nervosa is not qualitatively different from female controls. Body Image 10, 103–111. doi: 10.1016/j.bodyim.2012.09.003

Crucianelli, L., Paloyelis, Y., Ricciardi, L., Jenkinson, P. M., and Fotopoulou, A. (2019). Embodied precision: intranasal oxytocin modulates multisensory integration. J. Cogn. Neurosci. 31, 592–606. doi: 10.1162/jocn_a_01366

Dakanalis, A., Gaudio, S., Serino, S., Clerici, M., Carrà, G., and Riva, G. (2016). Body-image distortion in anorexia nervosa. Nat. Rev. Dis. Primers 2:16026. doi: 10.1038/nrdp.2016.26

Dakanalis, A., Manzoni, G. M., Castelnuovo, G., Riva, G., and Clerici, M. (2017). Towards novel paradigms for treating dysfunctional bodily experience in eating disorders. Eat. Weight Disord. 22, 373–375. doi: 10.1007/s40519-017-0361-5

Davison, T. E. (2012). “Body image in social contexts,” in Encyclopedia of Body Image and Human Appearance, Vol. 1, ed. T. F. Cash (Amsterdam: Elsevier), 243–249. doi: 10.1016/B978-0-12-384925-0.00023-7

De Coster, L., Sánchez-Herrero, P., Aliaga, C., Otaduy, M. A., López-Moreno, J., and Tajadura-Jiménez, A. (2020). Perceived match between own and observed models’ bodies: influence of face, viewpoints, and body size. Sci. Rep. 10:13991. doi: 10.1038/s41598-020-70856-8

della Gatta, F., Garbarini, F., Puglisi, G., Leonetti, A., Berti, A., and Borroni, P. (2016). Decreased motor cortex excitability mirrors own hand disembodiment during the rubber hand illusion. eLife 5:e14972. doi: 10.7554/eLife.14972

Ducheneaut, N., Wen, M.-H., Yee, N., and Wadley, G. (2009). “Body and mind: a study of avatar personalization in three virtual worlds,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (New York NY), 1151–1160. doi: 10.1145/1518701.1518877

Dummer, T., Picot-Annand, A., Neal, T., and Moore, C. (2009). Movement and the rubber hand illusion. Perception 38, 271–280. doi: 10.1068/p5921

Ehrsson, H. H. (2007). The experimental induction of out-of-body experiences. Science 317:1048. doi: 10.1126/science.1142175

Ehrsson, H. H. (2012). “The concept of body ownership and its relation to multisensory integration,” in The New Handbook of Multisensory Processing, ed. B. E. Stein (Cambridge, MA: MIT Press), 775–792.

Ehrsson, H. H., and Chancel, M. (2019). Premotor cortex implements causal inference in multisensory own-body perception. Proc. Natl. Acad. Sci. U.S.A. 116, 19771–19773. doi: 10.1073/pnas.1914000116

Ehrsson, H. H., Holmes, N. P., and Passingham, R. E. (2005). Touching a rubber hand: feeling of body ownership is associated with activity in multisensory brain areas. J. Neurosci. 25, 10564–10573. doi: 10.1523/JNEUROSCI.0800-05.2005

Ehrsson, H. H., Spence, C., and Passingham, R. E. (2004). That’s my hand! Activity in premotor cortex reflects feeling of ownership of a limb. Science 305, 875–877. doi: 10.1126/science.1097011

Eshkevari, E., Rieger, E., Longo, M. R., Haggard, P., and Treasure, J. (2012). Increased plasticity of the bodily self in eating disorders. Psychol. Med. 42, 819–828. doi: 10.1017/S0033291711002091

Fairhurst, M. T., Travers, E., Hayward, V., and Deroy, O. (2018). Confidence is higher in touch than in vision in cases of perceptual ambiguity. Sci. Rep. 8:15604. doi: 10.1038/s41598-018-34052-z

Ferri, F., Frassinetti, F., Costantini, M., and Gallese, V. (2011). Motor simulation and the bodily self. PLoS One 6:e17927. doi: 10.1371/journal.pone.0017927

Fikkan, J. L., and Rothblum, E. D. (2012). Is fat a feminist issue? Exploring the gendered nature of weight bias. Sex Roles 66, 575–592. doi: 10.1007/s11199-011-0022-5

Fogassi, L., Gallese, V., Fadiga, L., Luppino, G., Matelli, M., and Rizzolatti, G. (1996). Coding of peripersonal space in inferior premotor cortex (area F4). J. Neurophysiol. 76, 141–157. doi: 10.1152/jn.1996.76.1.141

Gadsby, S. (2017). Distorted body representations in anorexia nervosa. Conscious. Cogn. 51, 17–33. doi: 10.1016/j.concog.2017.02.015

Gallagher, S. (2000). Philosophical conceptions of the self: implications for cognitive science. Trends Cogn. Sci. 4, 14–21. doi: 10.1016/S1364-6613(99)01417-5

Gallagher, S., and Daly, A. (2018). Dynamical relations in the self-pattern. Front. Psychol. 9:664.

Gallup, G. G. (1970). Chimpanzees: self-recognition. Science 167, 86–87. doi: 10.1126/science.167.3914.86

Garbarini, F., Pia, L., Piedimonte, A., Rabuffetti, M., Gindri, P., and Berti, A. (2013). Embodiment of an alien hand interferes with intact-hand movements. Curr. Biol. 23, R57–R58. doi: 10.1016/j.cub.2012.12.003

Gardner, R. M., and Brown, D. L. (2014). Body size estimation in anorexia nervosa: a brief review of findings from 2003 through 2013. Psychiatry Res. 219, 407–410. doi: 10.1016/j.psychres.2014.06.029

Goodwin, G. M., McCloskey, D. I., and Matthews, P. B. C. (1972). Proprioceptive illusions induced by muscle vibration: contribution by muscle spindles to perception? Science 175, 1382–1384.

Graziano, M. S., Yap, G. S., and Gross, C. G. (1994). Coding of visual space by premotor neurons. Science 266, 1054–1057. doi: 10.1126/science.7973661

Grossman, E. D., and Blake, R. (2002). Brain areas active during visual perception of biological motion. Neuron 35, 1167–1175. doi: 10.1016/S0896-6273(02)00897-8

Hara, M., Pozeg, P., Rognini, G., Higuchi, T., Fukuhara, K., Yamamoto, A., et al. (2015). Voluntary self-touch increases body ownership. Front. Psychol. 6:1509.

Hartmann, C., and Siegrist, M. (2015). A longitudinal study of the relationships between the Big Five personality traits and body size perception. Body Image 14, 67–71. doi: 10.1016/j.bodyim.2015.03.011

Hashimoto, T., and Iriki, A. (2013). Dissociations between the horizontal and dorsoventral axes in body-size perception. Eur. J. Neurosci. 37, 1747–1753. doi: 10.1111/ejn.12187

He, J., Sun, S., Zickgraf, H. F., Lin, Z., and Fan, X. (2020). Meta-analysis of gender differences in body appreciation. Body Image 33, 90–100. doi: 10.1016/j.bodyim.2020.02.011

Hennighausen, K., Enkelmann, D., Wewetzer, C., and Remschmidt, H. (1999). Body image distortion in anorexia nervosa – is there really a perceptual deficit? Eur. Child Adolesc. Psychiatry 8, 200–206. doi: 10.1007/s007870050130

Irvine, K. R., McCarty, K., Pollet, T. V., Cornelissen, K. K., Tovée, M. J., and Cornelissen, P. L. (2019). The visual cues that drive the self-assessment of body size: dissociation between fixation patterns and the key areas of the body for accurate judgement. Body Image 29, 31–46. doi: 10.1016/j.bodyim.2019.02.006

JASP Team (2020). JASP (Version 0.12.2) [Computer software]. Available online at: https://jasp-stats.org/ (accessed June 2020).

Jin, S.-A. A. (2010). “I feel more connected to the physically ideal mini me than the mirror-image mini me”: theoretical implications of the “malleable self” for speculations on the effects of avatar creation on avatar–self connection in Wii. Cyberpsychol. Behav. Soc. Netw. 13, 567–570. doi: 10.1089/cyber.2009.0243

Johnson, F., Cooke, L., Croker, H., and Wardle, J. (2008). Changing perceptions of weight in Great Britain: comparison of two population surveys. BMJ 337, a494. doi: 10.1136/bmj.a494

Kable, J. W., and Chatterjee, A. (2006). Specificity of action representations in the lateral occipitotemporal cortex. J. Cogn. Neurosci. 18, 1498–1517. doi: 10.1162/jocn.2006.18.9.1498

Kadambi, A., and Lu, H. (2018). Individual differences in self-recognition from body movements. J. Vision 18:1039. doi: 10.1167/18.10.1039

Kaplan, R. A., Rossell, S. L., Enticott, P. G., and Castle, D. J. (2013). Own-body perception in body dysmorphic disorder. Cogn. Neuropsychiatry 18, 594–614. doi: 10.1080/13546805.2012.758878

Keizer, A., Smeets, M. A. M., Postma, A., van Elburg, A., and Dijkerman, H. C. (2014). Does the experience of ownership over a rubber hand change body size perception in anorexia nervosa patients? Neuropsychologia 62, 26–37. doi: 10.1016/j.neuropsychologia.2014.07.003

Kilteni, K., Maselli, A., Kording, K. P., and Slater, M. (2015). Over my fake body: body ownership illusions for studying the multisensory basis of own-body perception. Front. Hum. Neurosci. 9:141. doi: 10.3389/fnhum.2015.00141

Kim, J., and Forsythe, S. (2008). Adoption of virtual try-on technology for online apparel shopping. J. Interact. Mark. 22, 45–59. doi: 10.1002/dir.20113

Latoschik, M. E., Roth, D., Gall, D., Achenbach, J., Waltemate, T., and Botsch, M. (2017). “The effect of avatar realism in immersive social virtual realities,” in Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, (New York NY). doi: 10.1145/3139131.3139156

Lenggenhager, B., Tadi, T., Metzinger, T., and Blanke, O. (2007). Video ergo sum: manipulating bodily self-consciousness. Science 317, 1096–1099. doi: 10.1126/science.1143439

Linkenauger, S. A., Kirby, L. R., McCulloch, K. C., and Longo, M. R. (2017). People watching: the perception of the relative body proportions of the self and others. Cortex 92, 1–7. doi: 10.1016/j.cortex.2017.03.004

Longo, M. R., and Haggard, P. (2010). An implicit body representation underlying human position sense. Proc. Natl. Acad. Sci. U.S.A. 107, 11727–11732. doi: 10.1073/pnas.1003483107

Longo, M. R., and Haggard, P. (2011). Weber’s illusion and body shape: anisotropy of tactile size perception on the hand. J. Exp. Psychol. Hum. Percept. Perform. 37, 720–726. doi: 10.1037/a0021921

Longo, M. R., and Haggard, P. (2012). Implicit body representations and the conscious body image. Acta Psychol. 141, 164–168. doi: 10.1016/j.actpsy.2012.07.015

Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., and Black, M. J. (2015). SMPL: a skinned multi-person linear model. ACM Trans. Graph. 34:248. doi: 10.1145/2816795.2818013

Maister, L., De Beukelaer, S., Longo, M., and Tsakiris, M. (2021). The Self in the Mind’s eye: revealing how we truly see ourselves through reverse correlation. Psychol. Sci.

Malighetti, C., Gaudio, S., Di Lernia, D., Matamala-Gomez, M., and Riva, G. (2020). Altered inner body perception in anorexia and bulimia nervosa: a systematic review. PsyArXiv [Preprint]. doi: 10.31234/osf.io/2x4em

Martin, M. G. F. (1995). “Bodily awareness: a sense of ownership,” in The Body and the Self, eds J. L. Bermúdez, A. J. Marcel, and N. Eilan (Cambridge MA: The MIT Press), 267–289.

Martinaud, O., Besharati, S., Jenkinson, P. M., and Fotopoulou, A. (2017). Ownership illusions in patients with body delusions: different neural profiles of visual capture and disownership. Cortex 87, 174–185. doi: 10.1016/j.cortex.2016.09.025

Maselli, A., and Slater, M. (2014). Sliding perspectives: dissociating ownership from self-location during full body illusions in virtual reality. Front. Hum. Neurosci. 8:693.

Mendelson, B. K., Mendelson, M. J., and White, D. R. (2010). Body-esteem scale for adolescents and adults. J. Pers. Assess. 76, 90–106. doi: 10.1207/S15327752JPA7601

Merle, A., Senecal, S., and St-Onge, A. (2012). Whether and how virtual try-on influences consumer responses to an apparel web site. Int. J. Electr. Commerce 16, 41–64. doi: 10.2753/JEC1086-4415160302

Mölbert, S. C., Thaler, A., Mohler, B. J., Streuber, S., Romero, J., Black, M. J., et al. (2018). Assessing body image in anorexia nervosa using biometric self-avatars in virtual reality: attitudinal components rather than visual body size estimation are distorted. Psychol. Med. 48, 642–653. doi: 10.1017/S0033291717002008

Monteath, S. A., and McCabe, M. P. (1997). The influence of societal factors on female body image. J. Soc. Psychol. 137, 708–727. doi: 10.1080/00224549709595493

Myers, A., and Sowden, P. T. (2008). Your hand or mine? The extrastriate body area. NeuroImage 42, 1669–1677. doi: 10.1016/j.neuroimage.2008.05.045

Narain, R., Pfaff, T., and O’Brien, J. F. (2013). Folding and crumpling adaptive sheets. ACM Trans. Graph. 32:51.

Narain, R., Samii, A., and O’Brien, J. F. (2012). Adaptive anisotropic remeshing for cloth simulation. ACM Trans. Graph. 31:147.

Nava, E., Bolognini, N., and Turati, C. (2017). The development of a cross-modal sense of body ownership. Psychol. Sci. 28, 330–337. doi: 10.1177/0956797616682464

Nava, E., Gamberini, C., Berardis, A., and Bolognini, N. (2018). Action shapes the sense of body ownership across human development. Front. Psychol. 9:2507.

Peck, T. C., Seinfeld, S., Aglioti, S. M., and Slater, M. (2013). Putting yourself in the skin of a black avatar reduces implicit racial bias. Conscious. Cogn. 22, 779–787. doi: 10.1016/j.concog.2013.04.016

Petersen, J. A., and Kumar, V. (2009). Are product returns a necessary evil? Antecedents and consequences. J. Mark. 73, 35–51. doi: 10.1509/jmkg.73.3.035

Petkova, V. I., and Ehrsson, H. H. (2008). If I were you: perceptual illusion of body swapping. PLoS One 3:e3832. doi: 10.1371/journal.pone.0003832

Pfaff, T., Narain, R., de Joya, J. M., and O’Brien, J. F. (2014). Adaptive tearing and cracking of thin sheets. ACM Trans. Graph. 33, 1–9.

Piryankova, I. V., Wong, H. Y., Linkenauger, S. A., Stinson, C., Longo, M. R., Bülthoff, H. H., et al. (2014). Owning an overweight or underweight body: distinguishing the physical, experienced and virtual body. PLoS One 9:e103428. doi: 10.1371/journal.pone.0103428

Pitron, V., and de Vignemont, F. (2017). Beyond differences between the body schema and the body image: insights from body hallucinations. Conscious. Cogn. 53, 115–121. doi: 10.1016/j.concog.2017.06.006

Preston, C., and Ehrsson, H. H. (2014). Illusory changes in body size modulate body satisfaction in a way that is related to non-clinical eating disorder psychopathology. PLoS One 9:e85773. doi: 10.1371/journal.pone.0085773

R Core Team (2020). R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing.

Ralph-Nearman, C., Arevian, A. C., Puhl, M., Kumar, R., Villaroman, D., Suthana, N., et al. (2019). A novel mobile tool (Somatomap) to assess body image perception pilot tested with fashion models and nonmodels: cross-sectional study. JMIR Ment. Health 6:e14115. doi: 10.2196/14115

Ramachandran, V. S., and Hirstein, W. (1998). The perception of phantom limbs. The D. O. Hebb lecture. Brain 121, 1603–1630. doi: 10.1093/brain/121.9.1603

Rammstedt, B., and John, O. P. (2007). Measuring personality in one minute or less: a 10-item short version of the Big Five Inventory in English and German. J. Res. Pers. 41, 203–212. doi: 10.1016/j.jrp.2006.02.001

Rizzolatti, G., and Craighero, L. (2004). The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192. doi: 10.1146/annurev.neuro.27.070203.144230

Robinson, E., and Kersbergen, I. (2017). Overweight or about right? A norm comparison explanation of perceived weight status. Obesity Sci. Pract. 3, 36–43. doi: 10.1002/osp4.89

Romero, C., Otaduy, M. A., Casas, D., and Perez, J. (2020). Modeling and estimation of nonlinear skin mechanics for animated avatars. Comp. Graph. Forum 39, 77–88.

Saarijärvi, H., Sutinen, U.-M., and Harris, L. C. (2017). Uncovering consumers’ returning behaviour: a study of fashion e-commerce. Int. Rev. Retail Distrib. Consum. Res. 27, 284–299.

Sadibolova, R., Ferrè, E. R., Linkenauger, S. A., and Longo, M. R. (2019). Distortions of perceived volume and length of body parts. Cortex 111, 74–86. doi: 10.1016/j.cortex.2018.10.016

Schönberger, J. L., and Frahm, J.-M. (2016). “Structure-from-motion revisited,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (Las Vegas, NV), 4104–4113. doi: 10.1109/CVPR.2016.445

Schönberger, J. L., Zheng, E., Frahm, J.-M., and Pollefeys, M. (2016). “Pixelwise view selection for unstructured multi-view stereo,” in Computer Vision – ECCV 2016, eds B. Leibe, J. Matas, N. Sebe, and M. Welling (New York NY: Springer International Publishing), 501–518.

Spahlholz, J., Baer, N., König, H.-H., Riedel-Heller, S. G., and Luck-Sikorski, C. (2016). Obesity and discrimination – a systematic review and meta-analysis of observational studies. Obesity Rev. 17, 43–55. doi: 10.1111/obr.12343

Steinsbekk, S., Klöckner, C. A., Fildes, A., Kristoffersen, P., Rognsås, S. L., and Wichstrøm, L. (2017). Body size estimation from early to middle childhood: stability of underestimation, BMI, and gender effects. Front. Psychol. 8:2038.

Stice, E., and Shaw, H. E. (2002). Role of body dissatisfaction in the onset and maintenance of eating pathology: a synthesis of research findings. J. Psychosom. Res. 53, 985–993. doi: 10.1016/s0022-3999(02)00488-9

Tajadura-Jiménez, A., Basia, M., Deroy, O., Fairhurst, M., Marquardt, N., and Bianchi-Berthouze, N. (2015). “As light as your footsteps: altering walking sounds to change perceived body weight, emotional state and gait,” in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, (New York NY), 2943–2952. doi: 10.1145/2702123.2702374

Thaler, A., Geuss, M. N., Mölbert, S. C., Giel, K. E., Streuber, S., Romero, J., et al. (2018). Body size estimation of self and others in females varying in BMI. PLoS One 13:e0192152. doi: 10.1371/journal.pone.0192152

Tovée, M. J., Benson, P. J., Emery, J. L., Mason, S. M., and Cohen-Tovée, E. M. (2003). Measurement of body size and shape perception in eating-disordered and control observers using body-shape software. Br. J. Psychol. 94, 501–516. doi: 10.1348/000712603322503060

Urgesi, C., Candidi, M., Ionta, S., and Aglioti, S. M. (2007). Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nat. Neurosci. 10, 30–31. doi: 10.1038/nn1815

Urgesi, C., Moro, V., Candidi, M., and Aglioti, S. M. (2006). Mapping implied body actions in the human motor system. J. Neurosci. 26, 7942–7949. doi: 10.1523/JNEUROSCI.1289-06.2006

Vallar, G., and Ronchi, R. (2009). Somatoparaphrenia: a body delusion. A review of the neuropsychological literature. Exp. Brain Res. 192, 533–551. doi: 10.1007/s00221-008-1562-y

Varol, G., Romero, J., Martin, X., Mahmood, N., Black, M. J., Laptev, I., et al. (2017). “Learning from synthetic humans,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), (Piscataway, NJ).

Wagenmakers, E.-J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Love, J., et al. (2018). Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications. Psychon. Bull. Rev. 25, 35–57. doi: 10.3758/s13423-017-1343-3

Wang, H., Ramamoorthi, R., and O’Brien, J. F. (2011). Data-driven elastic models for cloth: modeling and measurement. ACM Trans. Graph. 30:71.

Keywords: body representation, body perception, bodily self-awareness, movement, self-esteem, avatar

Citation: De Coster L, Sánchez-Herrero P, López-Moreno J and Tajadura-Jiménez A (2021) The Perceived Match Between Observed and Own Bodies, but Not Its Accuracy, Is Influenced by Movement Dynamics and Clothing Cues. Front. Hum. Neurosci. 15:701872. doi: 10.3389/fnhum.2021.701872

Received: 28 April 2021; Accepted: 07 July 2021;
Published: 28 July 2021.

Edited by:

Laura Crucianelli, Karolinska Institutet, Sweden

Reviewed by:

Valentina Cazzato, Liverpool John Moores University, United Kingdom
Dalila Burin, Tohoku University School of Medicine, Japan

Copyright © 2021 De Coster, Sánchez-Herrero, López-Moreno and Tajadura-Jiménez. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lize De Coster, lcoster@inf.uc3m.es; Ana Tajadura-Jiménez, atajadur@inf.uc3m.es
