About this Research Topic
Millions of people worldwide are affected by mental disorders, including depression, bipolar disorder, obsessive-compulsive disorder, autism spectrum disorder, schizophrenia, and related neurological diseases such as Parkinson’s and Alzheimer’s. Reliable assessment, monitoring, and evaluation are important to identify individuals in need of treatment, evaluate treatment response, and achieve remission or moderate a disorder’s impact. Many indicators of the presence or severity of mental disorders are observable. They include psychomotor agitation (inability to sit still, pacing, hand wringing) or retardation (slowed speech and body movements, speech that is decreased in volume or vocal quality), as well as changes in facial expression, gaze, body movement, and cognition. Because mental disorders are heterogeneous, such indicators occur in many possible combinations.
To date, attempts at diagnosis, screening, and evaluation of treatment response from behavioral indicators have focused primarily on the individual alone and on individual modalities. Yet disorders are multimodal and heterogeneous: changes in facial movements, body movements, gestures, and speech activity can be observed in many combinations. Moreover, disorders strongly affect social interaction and relationships within families, in work settings, and on social media. For these reasons, it is critical to use multimodal indicators across a variety of social contexts.
This Research Topic aims to investigate how advances in computer vision, signal processing, and machine learning (especially deep learning) contribute to automated diagnosis, monitoring, and treatment of mental disorders, especially in interpersonal contexts. Conventional machine learning approaches, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), could be used for multimodal investigation of mental disorders in several contexts. Emerging machine learning techniques, including attention-based approaches for multimodal data fusion, transformers for speech and language processing, vision transformers for facial representation, generative adversarial approaches for data augmentation, and adversarial training for domain transfer and cross-domain generalizability, could be explored within the context of multimodal investigation of psychopathology.
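To make the fusion idea concrete, below is a minimal, dependency-free sketch of scaled dot-product cross-attention between two behavioral streams. All names, dimensions, and the toy embeddings are hypothetical illustrations, not a prescribed method for submissions:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross_attention_fuse(query_feats, context_feats):
    """Fuse two modality streams with scaled dot-product cross-attention.

    query_feats / context_feats are lists of equal-length feature vectors
    (e.g. per-frame speech and facial embeddings projected into a shared
    space). Each query frame attends over the context modality and is
    replaced by the attention-weighted average of context vectors.
    """
    d = len(query_feats[0])
    fused = []
    for q in query_feats:
        scores = [dot(q, c) / math.sqrt(d) for c in context_feats]
        weights = softmax(scores)  # weights over context frames sum to 1
        fused.append([sum(w * c[i] for w, c in zip(weights, context_feats))
                      for i in range(d)])
    return fused

# Toy example: 2 speech frames attending over 3 facial-expression frames
# in a shared 4-dimensional feature space (hypothetical embeddings).
speech = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
face = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
fused = cross_attention_fuse(speech, face)
```

In practice such fusion layers are learned end to end (e.g. with multi-head attention in a transformer), but the core operation shown here, weighting one modality's frames by their similarity to another modality's frames, is the same.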
We are soliciting original contributions that address advancements and challenges in multimodal approaches for automated assessment, monitoring, and treatment of psychopathology including but not limited to the following topics:
• Multimodal behavioral indicators of psychopathology occurrence and severity, especially those concerned with change over time
• Speech and language processing for psychopathology
• Wearable sensors for monitoring psychopathology
• Assessment or monitoring of psychopathology (detection of depression, obsessive-compulsive disorder, dementia, autism, suicidal ideation or behavior, and other conditions)
• Evaluation of treatment response
• Interpersonal indicators and mechanisms
• Patient-clinician interaction
• Family interaction
• Group therapy
• Human interaction on social media (e.g., detection of early signals of psychopathology, suicidal behavior)
Topic Editor Jeffrey Cohn is a co-founder of Deliberate, a member of the boards of advisors of Embodied and RealEyes, and holds three U.S. patents (numbers 9,799,096; 10,335,045; and 10,540,678).
The other Topic Editors declare no potential conflicts of interest.
Keywords: multimodal interaction in psychopathology, diagnosis and monitoring of mental disorders, evaluation of treatment response, multimodal data fusion, deep learning
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.