AUTHOR=Ahmed Usman, Mukhiya Suresh Kumar, Srivastava Gautam, Lamo Yngve, Lin Jerry Chun-Wei
TITLE=Attention-Based Deep Entropy Active Learning Using Lexical Algorithm for Mental Health Treatment
JOURNAL=Frontiers in Psychology
VOLUME=12
YEAR=2021
URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.642347
DOI=10.3389/fpsyg.2021.642347
ISSN=1664-1078
ABSTRACT=With the increasing prevalence of Internet usage, Internet-Delivered Psychological Treatment (IDPT) has become a valuable tool for developing improved treatments for mental disorders. The task is complicated and laborious because of the overlapping emotions in mental health texts. This paper focuses on personalized mental health intervention using Natural Language Processing (NLP) and attention-based deep entropy active learning. The objective of the research is to increase the number of trainable instances through a semantic clustering mechanism. For this purpose, we propose a method based on synonym expansion using semantic vectors. Semantic vectors, derived from the contexts in which words appear, are clustered; the resulting similarity metrics help select a subset of the unlabeled text. The proposed method separates this unlabeled text and includes it in the next cycle of the active learning mechanism, updating the model with the new training points. The cycle continues until it reaches an optimal solution, having converted all the unlabeled text into the training set. Experimental results show that synonym expansion with semantic vectors enhances training accuracy without harming the results. The bidirectional LSTM architecture with an attention mechanism achieved 0.85 ROC on the blind test set. The learned embeddings are then used to visualize each activated word's contribution to each symptom and to assess qualitative agreement with psychiatrists. The method improves the detection rate of depression symptoms in online forum text by exploiting unlabeled forum texts.
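The active learning cycle summarized in the abstract (score unlabeled texts by predictive uncertainty, move the most uncertain ones into the next training round) can be sketched with entropy-based selection. This is a minimal illustration, not the paper's implementation: the function names and the toy probability pool are assumptions, and in the paper the probabilities would come from the attention-based bidirectional LSTM symptom classifier.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each row of class probabilities (higher = more uncertain)."""
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_uncertain(probs, k):
    """Return the indices of the k most uncertain unlabeled examples."""
    scores = predictive_entropy(probs)
    return np.argsort(scores)[::-1][:k]  # highest entropy first

# Toy pool of classifier outputs over two classes (rows sum to 1).
pool = np.array([
    [0.98, 0.02],  # confident prediction -> low entropy
    [0.55, 0.45],  # near-uniform -> high entropy
    [0.70, 0.30],  # in between
])

picked = select_uncertain(pool, k=2)
print(picked)  # indices of the two most uncertain rows: [1 2]
```

In a full loop, the selected texts would be labeled (or pseudo-labeled via the semantic-clustering step), appended to the training set, and the model retrained; the cycle repeats until the unlabeled pool is exhausted.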