Mental health issues have become a significant concern globally, particularly after the COVID-19 pandemic. Early screening and detection of mental health issues are critical. Computer audition offers a non-invasive and efficient approach to recognizing mental states (e.g., emotional states, depression) from sounds such as speech and vocal bursts, facilitating complementary diagnosis of mental health issues (e.g., bipolar disorder). Moreover, computer audition enables automated screening and monitoring of patients in a natural and unbiased manner, without the constraints of traditional clinical methods. This is particularly beneficial for regions with limited medical resources.
By combining audio with other modalities such as images, biosignals, and text, mental health issues can be screened more accurately and effectively. Additionally, with the help of generative models such as GPT (Generative Pre-trained Transformer) models and diffusion models, personalized emotional sounds (e.g., speech, music) can be generated to support individuals experiencing mental health issues or requiring companionship. However, building robust and explainable models for recognizing mental states and generating emotional sounds is crucial to ensure privacy and transparency. Techniques such as adversarial training, federated learning, and prototype learning can aid in this regard.
The primary goal of this Research Topic is to bring together researchers from diverse backgrounds to address the challenges and opportunities in the use of computer audition in mental healthcare. We aim to explore the latest developments in the field, including the use of advanced machine learning techniques to build robust models, the development of explainable models to support clinical decision-making, and the development of generative models to provide mental support to those in need.
We invite authors to submit original research articles, reviews, and perspectives that address themes including, but not limited to, the following:
● Development of robust and/or explainable computer audition models for mental health contexts, considering application to specific mental disorders
● Multimodal learning (e.g., images, biosignals, text) in mental healthcare
● Development of new acoustic datasets for mental health
● Acoustic generation models (e.g., GPT models) for mental health support
● Human-computer interaction systems for mental health
● Discussions of fairness, greenness, and other ethical considerations in the use of computer audition in mental healthcare
● Evaluation of computer audition solutions in mental health contexts
Keywords:
mental health, computer audition, acoustic generation, trustworthiness, robustness
Important Note:
All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.