Edited by: Mohamed Chetouani, Pierre-and-Marie-Curie University, France
Reviewed by: Maria Koutsombogera, Institute for Language and Speech Processing, Greece; Andrej Košir, University of Ljubljana, Slovenia
Specialty section: This article was submitted to Human-Media Interaction, a section of the journal Frontiers in ICT
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Most studies investigating the processing of emotions in depressed patients report impairments in the decoding of negative emotions. However, these studies adopted static stimuli (mostly stereotypical facial expressions corresponding to basic emotions) that do not reflect the way people experience emotions in everyday life. For this reason, this work investigates the decoding of emotional expressions in patients affected by recurrent major depressive disorder (RMDD) using dynamic audio/video stimuli. RMDDs’ performance is compared with that of patients with adjustment disorder with depressed mood (ADs) and healthy control (HC) subjects. The experiments involve 27 RMDDs (16 with acute depression, RMDD-A, and 11 in a compensation phase, RMDD-C), 16 ADs, and 16 HCs. The ability to decode emotional expressions is assessed through an emotion recognition task based on short audio (without video), video (without audio), and audio/video clips. The results show that ADs are significantly less accurate than HCs in decoding fear, anger, happiness, surprise, and sadness. RMDD-As are significantly less accurate than HCs in decoding happiness, sadness, and surprise. Finally, no significant differences were found between RMDD-Cs and HCs. The different communication channels and emotion types play a significant role in limiting decoding accuracy.
Accurate processing of emotional information is an important social skill allowing one to correctly decode others’ verbal and non-verbal emotional expressions, to provide appropriate affective feedback, and to adopt consistent social behaviors (Esposito,
Results from the abovementioned studies show that subjects with depression exhibit a selective attention toward faces expressing negative emotions when stimuli last more than 1 s. This suggests that abnormal emotional information processing during depression is due to cognitive rather than attentional processes (Gotlib et al.,
Although many studies confirm a bias toward negative emotions, others find a deficit in the decoding of positive emotions, especially happiness (Gur et al.,
The common characteristic of these studies is that they investigate the ability of depressed subjects to decode emotional expressions through the visual channel, using photos. This has two main limitations: (1) static stimuli do not reflect the way people experience emotions in everyday life (facial emotional expressions are dynamic and are often accompanied by vocalizations and/or speech) and (2) photos of emotional faces are taken at high emotional intensity and do not correspond to the emotions expressed in everyday life. On the contrary, multimodal dynamic stimuli have greater ecological validity and allow investigation of the amount of emotional information conveyed not only by the visual channel but also by the auditory one and by the combination of visual and auditory signals. Nevertheless, only a few recent studies exploit dynamic emotional stimuli to investigate depressed subjects’ ability to decode emotional expressions using either audio or audio/video stimuli (Kan et al.,
In summary, studies investigating the ability of depressed subjects to decode emotional stimuli are numerous and report different results. These differences may be attributed to the different methodologies used to assess accuracy, the type of stimuli, the characteristics of the participants, and the different clinical states and degrees of depression. Given these varied findings, it remains an open issue whether depressed subjects exhibit a global or a specific emotional bias in decoding emotional expressions, as well as whether their clinical state (i.e., the acute or compensation phase) and degree of depression play a role in their performance. In addition, it is of interest to assess the role of the communication channels in conveying emotional information.
This study aims at clarifying these issues through the analysis of how people with depression decode emotional displays. The goal is to explore the RMDDs’ ability to decode emotional multimodal dynamic stimuli and to compare their performance with that of both AD and HC subjects. Our hypotheses are:
1. Recurrent major depressive disorder (RMDD) patients should show a negative bias (that is, an emotion recognition deficit) toward specific basic emotions.
2. The bias is more evident when depressive symptoms are severe; thus, the performance of acutely depressed patients should be worse than that of compensated ones.
3. The bias is independent of the communication mode; thus, it should appear in visual, auditory, and visual/auditory stimuli.
Dynamic stimuli in visual, auditory, or combined visual/auditory form are used. They are extracted from Italian movies and are therefore embedded in the movie script context, increasing the naturalness and ecological validity of the experiment.
Four groups of participants took part in this study:
- Outpatients with recurrent major depression in acute phase (RMDD-A). The initial group consisted of 20 subjects. A 51-year-old man was excluded for hearing impairments; a 39-year-old man was excluded because his depressed mood was possibly associated with brain surgery; a 38-year-old woman was excluded because she had not yet started drug therapy; a 64-year-old woman was excluded because the psychiatrist did not provide her clinical history questionnaire. The final group consisted of 16 outpatients (10 males and 6 females; mean age = 53.3; SD = 9.8).
- Outpatients with recurrent major depression in compensation phase (RMDD-C): 11 outpatients (3 males and 8 females; mean age = 48.8; SD = 11.9).
- Outpatients with adjustment disorder with depressed mood (AD). The initial group consisted of 18 subjects. A 55-year-old man and a 66-year-old woman were excluded because they had just started taking medications. The final group consisted of 16 outpatients (5 males and 11 females; mean age = 54.5; SD = 9.5).
- Healthy controls (HC). The initial group consisted of 18 subjects. A 38-year-old man and a 60-year-old woman were excluded because they were under anxiolytics. The final group consisted of 16 subjects (6 males and 10 females; mean age = 52; SD = 13.3).
A MDD is a “
The three groups of patients (RMDD-A, RMDD-C, and AD) were recruited at the Mental Health Service of Avellino, Italy. They received a diagnosis of Recurrent Major Depression Disorder and Adjustment Disorder with Depressed Mood according to DSM-IV criteria (American Psychiatric Association,
| | RMDD-A (Mean ± SD) | RMDD-C (Mean ± SD) | AD (Mean ± SD) | HC (Mean ± SD) |
|---|---|---|---|---|
| Age | 53.3 ± 9.8 | 48.8 ± 11.9 | 54.5 ± 9.5 | 52.0 ± 13.3 |
| Years of education | 2.6 ± 0.7 | 2.7 ± 0.5 | 3.0 ± 0.8 | 2.7 ± 0.9 |
| BDI-II | 37.4 ± 10.3 | 9.1 ± 6.0 | 29.2 ± 13.1 | 2.6 ± 3.3 |
| Duration of treatment (in years) | 7.4 ± 5.4 | 4.8 ± 5.7 | 3.8 ± 3.2 | |
For each RMDD-A, RMDD-C, and AD patient, the psychiatrists of the Mental Health Service Center provided the clinical history (diagnosis, type of drugs, and duration of treatment). Patients were excluded from the experiment if (a) the depressed mood was associated with other disorders (e.g., personality disorders, psychosis, alcoholism, cognitive decline, or hearing impairment), with the only exception of anxiety, because it is often associated with depression; or (b) the period during which the patient had been under drug therapy was shorter than 1 year.
The Italian version of the Beck Depression Inventory Second Edition (BDI II; Beck et al.,
The emotion recognition task consisted of 60 emotional stimuli grouped in 20 videotaped facial expressions (without audio), 20 audiotaped vocal expressions (without video), and 20 audio/video recordings, all selected from the COST 2102 Italian Emotional database, which consists of 216 emotional video-clips extracted from Italian speaking movies (Esposito et al.,
The recordings’ duration was kept short (between 2 and 3.5 s; the average stimulus length was 2.5 s, SD = ±1 s) to avoid overlaps of different emotions. The 60 stimuli selected from the abovementioned database were among those that received the highest raters’ agreement (more than 70%).
Informed consent forms were signed by the participants after the study had been described to them. The tasks were administered to the subjects individually in a quiet room. Each participant first completed the BDI-II and then the emotion recognition task. No time limit was given to complete the task. The stimuli were presented one by one on a PC monitor. After the presentation of each stimulus, subjects were asked to label it as happiness, fear, anger, surprise, sadness, a different emotion, or no emotion, selecting the option that best described (for him/her) the emotional state acted in the audio, visual, or audio/video stimulus. After each labeling, they moved to the next stimulus. The administration procedure lasted approximately 30 min.
The following statistical tests were performed to assess the collected data. Three one-way analyses of variance (ANOVA) were performed to evaluate whether there were significant differences among groups in age, BDI-II scores, and treatment duration (see
The Pearson Correlation coefficient was used to evaluate correlations between, on the one hand, answers to the emotion recognition task and, on the other hand, BDI-II scores (see
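As an illustrative sketch (not the authors’ analysis code), a per-group correlation of this kind can be computed with SciPy; the score arrays below are hypothetical placeholders, not the study data:

```python
# Hypothetical sketch of the correlation analysis described above;
# the score lists are illustrative placeholders, not the study data.
from scipy.stats import pearsonr

# One pair of lists per group: emotion recognition accuracy and BDI-II score
task_scores = [41, 38, 45, 50, 36, 44, 39, 47]
bdi_scores = [35, 42, 28, 20, 45, 30, 38, 25]

r, p = pearsonr(task_scores, bdi_scores)
print(f"Pearson r = {r:.2f}, p = {p:.2f}")
```

The same call would be repeated for each group and for treatment duration in place of the BDI-II scores.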
A repeated measures ANOVA (5 × 3) on the number of correct responses was performed separately for each involved group to assess its ability to decode the emotional stimuli (the five abovementioned basic emotions) portrayed through the three different communication modes (within-subject factors) (see
One-way ANOVA analyses were performed to assess differences between groups on communication modes and emotional categories (see
A repeated measures ANOVA (5 × 3 × 4) was performed, with emotions (the five basic emotions) and communication modes (audio, mute video, and combined audio/video) as within-subject factors, and groups (RMDD-As, RMDD-Cs, ADs, and HCs) as the between-subject factor. In addition, separate repeated measures ANOVAs were performed: a 5 × 3 × 3 ANOVA to compare RMDD-A and RMDD-C performance with HC subjects, and a 5 × 3 × 2 ANOVA to compare only RMDD-As and RMDD-Cs (see
Confusion matrices were computed on the percentage of correct responses for each communication mode, to assess misperceptions among emotion categories (see
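A confusion matrix of this kind can be sketched with scikit-learn; the intended/chosen label lists below are illustrative placeholders, not the participants’ responses:

```python
# Sketch of building a per-mode confusion matrix over the response options.
# The label lists are illustrative placeholders, not the study's responses.
from sklearn.metrics import confusion_matrix

labels = ["happiness", "fear", "anger", "surprise", "sadness",
          "different emotion", "no emotion"]

# Intended emotion of each stimulus vs. the label a participant chose
true = ["happiness", "fear", "anger", "surprise", "sadness", "fear"]
chosen = ["happiness", "surprise", "anger", "happiness", "sadness", "fear"]

cm = confusion_matrix(true, chosen, labels=labels)

# Row-normalize to percentages of responses per intended emotion
row_sums = cm.sum(axis=1, keepdims=True)
pct = 100 * cm / row_sums.clip(min=1)  # guard against empty rows
print(pct.round(1))
```

Each row then shows how often an intended emotion was labeled with each of the seven response options.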
The four groups (RMDD-A, RMDD-C, AD, and HC) did not differ significantly in age [
A significant difference among the four groups was found (as expected) for the BDI-II scores [
No correlation (
| Groups | Emotion recognition task (Mean) | SD | BDI-II score (Mean) | SD | Pearson r | p |
|---|---|---|---|---|---|---|
| RMDD-A | 41.4 | 6.75 | 37.4 | 10.26 | −0.34 | 0.19 |
| RMDD-C | 42.7 | 12.08 | 9.1 | 6.10 | 0.06 | 0.86 |
| AD | 36.2 | 8.44 | 29.2 | 13.09 | −0.20 | 0.44 |
| HC | 48.6 | 4.01 | 2.6 | 3.40 | 0.09 | 0.71 |
No correlation (
| Groups | Emotion recognition task (Mean) | SD | Average treatment duration (Mean) | SD | Pearson r | p |
|---|---|---|---|---|---|---|
| RMDD-A | 41.4 | 6.75 | 7.4 | 5.43 | −0.008 | 0.97 |
| RMDD-C | 42.7 | 12.08 | 4.8 | 5.7 | −0.41 | 0.21 |
| AD | 36.2 | 8.44 | 3.8 | 3.3 | 0.25 | 0.34 |
Figure
Table
| Communication mode | Emotion | RMDD-A^m Mean | SD | RMDD-C^n Mean | SD | AD^o Mean | SD | HC^p Mean | SD |
|---|---|---|---|---|---|---|---|---|---|
| Audio | Happiness | 2.75 | 0.30 | 2.55 | 0.37 | 3.00 | 0.30 | 3.31 | 0.30 |
| | Fear | 3.06^a | 0.21 | 2.82 | 0.25 | 2.25 | 0.21 | 3.06 | 0.21 |
| | Anger | 2.69 | 0.28 | 2.45 | 0.33 | 2.06 | 0.28 | 3.13 | 0.28 |
| | Surprise | 2.06^b | 0.28 | 2.82 | 0.34 | 2.06 | 0.28 | 3.38 | 0.28 |
| | Sadness | 2.81 | 0.27 | 2.36 | 0.32 | 2.25 | 0.27 | 2.81 | 0.27 |
| Mute video | Happiness | 2.63 | 0.25 | 3.00 | 0.31 | 2.63 | 0.25 | 3.69^l | 0.25 |
| | Fear | 3.19^c | 0.18 | 3.09^f | 0.22 | 3.00^h | 0.18 | 3.56^l | 0.18 |
| | Anger | 3.31^d | 0.27 | 3.18 | 0.33 | 2.44 | 0.27 | 3.50^l | 0.27 |
| | Surprise | 2.19^e | 0.28 | 2.18^g | 0.33 | 1.94^i | 0.28 | 2.00^k | 0.28 |
| | Sadness | 2.69 | 0.24 | 2.82 | 0.29 | 2.00^j | 0.24 | 3.31^l | 0.24 |
| Audio/video | Happiness | 2.69 | 0.27 | 3.18 | 0.32 | 2.81 | 0.27 | 3.31 | 0.27 |
| | Fear | 2.44 | 0.29 | 2.73 | 0.35 | 2.25 | 0.29 | 3.00 | 0.29 |
| | Anger | 3.31 | 0.22 | 3.73 | 0.26 | 2.63 | 0.22 | 3.63 | 0.22 |
| | Surprise | 3.06 | 0.25 | 2.82 | 0.30 | 2.69 | 0.25 | 3.44 | 0.25 |
| | Sadness | 2.50 | 0.29 | 3.00 | 0.35 | 2.19 | 0.29 | 3.44 | 0.29 |
The repeated measures ANOVA (5 × 3) performed to assess the ability of each group to decode emotional stimuli portrayed through the audio, mute video, and combined audio/video shows that:
- RMDD-As did not show significant differences among emotional categories [
- a significant difference in the audio between surprise and fear (
- a significant difference in the mute video between surprise and fear (
- RMDD-Cs did not show significant differences among emotional categories [
- ADs did not show significant differences among emotional categories [
- HCs did not show significant differences among communication modes [
A one-way ANOVA on each emotional category, independently of the communication mode, shows significant differences among the groups for fear [
A one-way ANOVA on each communication mode, independently of the emotional categories, shows significant differences among the groups for the audio [
The repeated measures ANOVA (5 × 3 × 4) shows as main effects:
- A significant difference among the groups [
- A significant effect among emotional categories [
- A significant effect among the communication modes [
The ANOVA analysis also reports significant interactions between (a) emotions and communication modes [

The significant interaction between emotions and communication modes indicates that the emotion decoding accuracy depends on the communication mode. More specifically:

- There are no significant differences among communication modes for the recognition accuracy of
- There are significant differences in the recognition accuracy of
- There are significant differences in the recognition accuracy of
- There are significant differences in the recognition accuracy of

The interaction between emotions × communication modes × groups indicates that the emotion decoding accuracy depends on both the communication mode and the involved groups. More specifically:

- a significant difference for
- a significant difference for indicating that ADs are less accurate than HCs (see mean scores in Table
- a significant difference for the mute video between ADs and HCs [
- the audio/video [ indicating that ADs are less accurate than HCs and RMDD-Cs in the decoding of anger (see mean scores in Table
- a significant difference for
- a significant difference for
From the above analyses, it clearly emerges that ADs are less accurate than the other participating groups, regardless of the emotion category and communication mode.
A 5 × 3 × 3 repeated measures ANOVA was performed to more finely assess possible further differences among RMDD-As, RMDD-Cs, and HCs. Results showed a significant difference between RMDD-As and HCs for happiness in the mute video [
No significant differences were found between RMDD-As and RMDD-Cs [
Table
Happiness | 0 | 0 | 16 | 2 | 8 | 6 | Happiness | 0 | 0 | 18 | 0 | 7 | 11 | ||
Fear | 0 | 11 | 3 | 8 | 2 | 0 | Fear | 0 | 9 | 11 | 2 | 2 | 5 | ||
Anger | 0 | 14 | 5 | 5 | 6 | 3 | Anger | 0 | 7 | 0 | 5 | 9 | 5 | ||
Surprise | 20 | 5 | 2 | 0 | 17 | 5 | Surprise | 9 | 0 | 2 | 2 | 9 | 7 | ||
Sadness | 3 | 2 | 5 | 5 | 9 | 6 | Sadness | 5 | 5 | 0 | 7 | 16 | 9 | ||
Happiness | 0 | 0 | 17 | 2 | 5 | 2 | Happiness | 0 | 0 | 11 | 0 | 3 | 3 | ||
Fear | 0 | 9 | 19 | 5 | 9 | 2 | Fear | 0 | 2 | 8 | 3 | 11 | 0 | ||
Anger | 2 | 11 | 6 | 11 | 17 | 2 | Anger | 0 | 6 | 2 | 5 | 6 | 3 | ||
Surprise | 19 | 6 | 2 | 6 | 11 | 5 | Surprise | 9 | 0 | 2 | 0 | 3 | 2 | ||
Sadness | 3 | 3 | 6 | 6 | 17 | 8 | Sadness | 2 | 3 | 0 | 0 | 22 | 3 | ||
Happiness | 0 | 5 | 6 | 2 | 17 | 5 | Happiness | 0 | 5 | 9 | 2 | 5 | 5 | ||
Fear | 3 | 6 | 0 | 5 | 6 | 0 | Fear | 2 | 5 | 7 | 5 | 2 | 2 | ||
Anger | 0 | 3 | 3 | 5 | 5 | 2 | Anger | 0 | 11 | 2 | 2 | 0 | 5 | ||
Surprise | 5 | 2 | 13 | 3 | 14 | 9 | Surprise | 0 | 0 | 23 | 5 | 14 | 11 | ||
Sadness | 2 | 0 | 6 | 5 | 11 | 9 | Sadness | 0 | 2 | 0 | 2 | 14 | 11 | ||
Happiness | 2 | 3 | 20 | 2 | 8 | 0 | Happiness | 0 | 2 | 5 | 0 | 2 | 0 | ||
Fear | 2 | 11 | 11 | 2 | 0 | 0 | Fear | 0 | 2 | 2 | 3 | 3 | 2 | ||
Anger | 0 | 9 | 8 | 5 | 13 | 3 | Anger | 0 | 6 | 2 | 0 | 5 | 0 | ||
Surprise | 3 | 8 | 16 | 8 | 13 | 5 | Surprise | 8 | 2 | 20 | 3 | 14 | 3 | ||
Sadness | 0 | 11 | 8 | 11 | 9 | 11 | Sadness | 0 | 3 | 3 | 0 | 11 | 0 | ||
Happiness | 0 | 2 | 9 | 2 | 20 | 0 | Happiness | 0 | 0 | 14 | 0 | 0 | 7 | ||
Fear | 0 | 6 | 5 | 11 | 14 | 3 | Fear | 0 | 5 | 2 | 5 | 11 | 9 | ||
Anger | 0 | 3 | 0 | 5 | 8 | 2 | Anger | 0 | 2 | 2 | 2 | 0 | 0 | ||
Surprise | 2 | 0 | 9 | 3 | 6 | 3 | Surprise | 2 | 5 | 5 | 0 | 2 | 14 | ||
Sadness | 0 | 6 | 8 | 5 | 17 | 2 | Sadness | 0 | 0 | 2 | 5 | 9 | 9 | ||
Happiness | 0 | 0 | 23 | 0 | 6 | 0 | Happiness | 0 | 0 | 9 | 0 | 8 | 0 | ||
Fear | 0 | 11 | 13 | 9 | 8 | 3 | Fear | 0 | 0 | 6 | 5 | 13 | 2 | ||
Anger | 0 | 14 | 2 | 11 | 6 | 2 | Anger | 0 | 5 | 0 | 0 | 5 | 0 | ||
Surprise | 8 | 0 | 5 | 3 | 13 | 5 | Surprise | 2 | 2 | 3 | 2 | 6 | 0 | ||
Sadness | 0 | 9 | 13 | 9 | 11 | 3 | Sadness | 0 | 5 | 2 | 0 | 6 | 0 |
It is worth discussing, at this stage, the confusion matrices reported in Table :

- Happiness, in the mute video, is less accurately decoded by RMDD-As and ADs than by HCs (accuracy is 66% for both groups vs. 92% for HCs), who mostly confused it with a different emotion (17%) and surprise (20%), respectively.
- Fear is less accurately recognized in the audio by ADs than by HCs (56 vs. 77%). ADs mostly confuse fear with surprise (19%).
- Anger is less accurately recognized by ADs than by HCs in the audio and audio/video (52 vs. 78% in the audio; 66 vs. 91% in the audio/video). ADs mostly confuse anger with a different emotion (17% in the audio and 13% in the audio/video).
- Surprise is less accurately recognized by ADs and RMDD-As than by HCs (52 vs. 84%) in the audio. ADs and RMDD-As mostly confuse surprise with happiness (19% for ADs and 20% for RMDD-As) and a different emotion (11% for ADs and 17% for RMDD-As). In addition, surprise, in the mute video, is the least accurately decoded emotion by all participating groups; there, surprise is mostly confused with anger and a different emotion (see percentages in Table
- Sadness is less accurately recognized by ADs than by HCs (50 vs. 83%) in the mute video. ADs mostly confuse sadness with fear, surprise, and no emotion (11%).
The goal of this study is to investigate the ability to decode multimodal emotional expressions in outpatients with RMDD. For this purpose, multimodal dynamic stimuli selected from the COST 2102 Italian databases (Esposito et al.,
To the best of our knowledge, this is the first study investigating the capability to decode emotional expressions in RMDD patients also in the compensation phase of the disorder. Indeed, usually, the comparison is made between patients in the acute and remission phase (Joormann and Gotlib,
In our study, the severity of depressive symptoms (scored through the BDI-II questionnaire) does not correlate with the patients’ (RMDD-As, RMDD-Cs, and ADs) emotion recognition task accuracy (see Table
The results reported in this paper indicate that the capability to decode emotional expressions is more impaired in AD than in RMDD patients (either RMDD-As or RMDD-Cs). ADs are, with respect to the other groups, especially impaired in recognizing the negative emotions of fear, anger, and sadness, while they perform similarly to RMDD-As (see Figure
For the ADs’ performance, there is no supporting data in the literature, since former studies report only comparisons between HCs and MDDs. Since our results indicate that ADs, more than MDDs, are unable to decode emotional expressions, it is worth hypothesizing that this inability is associated with depressive symptoms independently of their originating causes.
Our results also indicate that RMDD-As are significantly less accurate than HCs in decoding sadness and exhibit a generally worse performance for anger and fear, supporting the first hypothesis formulated in the Section “
To explain the ADs’ worse performance, with respect to RMDDs, in decoding negative emotional stimuli, it is worth considering Beck’s theoretical framework (Beck et al.,
ADs and RMDD-As also exhibit a poorer decoding of happiness with respect to HCs (see Figure
The RMDD-Cs show slightly better performance than RMDD-As (as asserted by the second hypothesis formulated in the Section “
Finally, confusion matrices show that, when depressed subjects made errors in the emotion labeling task, they often chose “a different emotion” as an option. Probably, this label was selected either when none of the listed labels fit the patients’ perceived emotional stimulus or when they were not able to identify the portrayed emotion.
Confusion among emotions can also arise because emotional expressions share common features that depend on the communication mode. For instance, in the mute video, happiness and surprise may have in common the movements of certain facial muscles (Ekman and Friesen,
These results show that depressed subjects do not exhibit a global deficit toward emotional stimuli; rather, their performance depends on the specific emotion and (see
Analyzing the subjects’ recognition accuracy through the three communication modes (audio, mute video, and audio/video), independently of the emotional category, it appears that the audio and the audio/video are, respectively, the modes in which all groups make the most and the fewest errors (see Figure
When emotion categories are accounted for, it appears that communication modes may affect the subjects’ ability to decode specific emotional states “independently” of the clinical state. This is particularly true for surprise and fear in the mute video, where all groups perform similarly. Surprise is a quite ambiguous emotion that can assume positive or negative valence, and a mute visual stimulus may not convey enough information for its correct interpretation (Esposito et al.,
The dependency of emotion recognition ability on the clinical state explains all the other differences in emotion recognition task accuracy.
These results support the third hypothesis formulated in the Section “
In this study, we found that depressed subjects have an impairment in the decoding of dynamic emotional expressions (vocal and facial expressions). It can be assumed that this misinterpretation of others’ emotional expressions not only contributes to their interpersonal difficulties but also does not allow them to correctly decode the emotions they experience, further aggravating the depressive symptoms. In addition, it may become more debilitating when subjects have to face stressful and negative events.
Our data are partly consistent with those of other studies that also exploited dynamic stimuli. Indeed, Schneider et al. (
The diverse results discussed earlier are not necessarily contradictory. Differences can be attributed to patients’ characteristics (acute, remission, recurrent, chronic, and so on), their medical status (inpatient, outpatient), pharmacological treatments, the severity of the disorder, the stimuli, and methodological paradigms. With respect to the last factor, distinct stimuli and tasks may involve distinct cognitive processes (i.e., memory, selective attention, and recognition), causing different performances. Given these multiple and different results, it is clear that standardized methodologies and ecological stimuli are necessary to assess depressed subjects’ capability to decode others’ emotional expressions.
Our future plans are:
- to further investigate depressive disorder’s effects on emotional information processing by increasing the number of participants;
- to investigate whether and which dysfunctional cognitive patterns are associated with depression;
- to check for anxiety effects, since depression is often associated with anxiety (Belzer and Schneier,
- to test for antidepressant effects, since antidepressants may influence emotional information processing (Bhagwagar et al.,
All authors listed have made substantial, direct, and intellectual contributions to the work and approved it for publication.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The authors acknowledge for their participation and support nurses, patients, and the following medical doctors from the Mental Health Center of Avellino (ASL Avellino), Italy: Acerra Giuseppe, M.D., Bianco Pietro, M.D., Fina Emilio, M.D., Florio Sergio, M.D., Nappi Giuseppe, M.D., Panella Maria Teresa, M.D., Prizio Domenica, M.D., Tarantino Costantino, M.D., and Tomasetti Antonio, M.D.