There is a critical need for noninvasive and reliable neuroimaging biomarkers to facilitate the early detection of neurological and psychiatric disorders, monitor disease progression, and objectively assess treatment outcomes. Over the past decades, network analysis approaches such as principal component analysis, independent component analysis, and graph theoretical analysis have been instrumental in analyzing functional brain imaging data, including fluorodeoxyglucose positron emission tomography (FDG PET) and resting-state functional MRI (rs-fMRI). Similarly, diffusion tensor imaging, fiber tractography, and connectome-based analysis have been applied to diffusion MRI to study disease-related structural brain networks. While these approaches have significantly advanced the field, their performance can vary due to factors such as data acquisition protocols, collection sites, and differences in scanner vendors.
The recent advent of artificial intelligence (AI), particularly deep learning models, has introduced new opportunities for the study of disease-related brain networks in neuroimaging. These methods can uncover patterns that were previously difficult to identify, enabling more precise detection of biomarkers and clinically relevant features. Explainable AI (XAI) frameworks further enhance this process by offering interpretable insights, fostering user trust, and improving understanding of the outputs generated by machine learning algorithms. Despite these advances, there remains a pressing need to develop robust and unbiased neuroimaging biomarkers across large, multi-center datasets and to establish frameworks that integrate explainability into AI-driven analyses.
This Research Topic seeks to address these challenges by focusing on the integration of deep learning neural networks and XAI in neuroimaging to characterize and validate disease-related networks. By leveraging techniques such as graph theoretical analysis, this initiative aims to explore changes in brain organization, investigate longitudinal disease progression, and assess treatment outcomes. Particular emphasis is placed on the transparency and interpretability of AI-based methods to build trust and confidence among researchers and clinicians.
Themes of interest include, but are not limited to:
- Identification and validation of imaging biomarkers for neurological and psychiatric disorders.
- Application of XAI to visually represent and interpret disease-related networks.
- Utilization of graph theoretical analysis to explore brain organization in disease-related networks.
- Longitudinal studies of disease progression and their implications for treatment monitoring.
- Comparative studies of various neuroimaging analysis methods, including AI-driven approaches.
- Development of protocols for multi-center neuroimaging studies to ensure consistency and reliability.
- Ethical considerations and strategies to enhance user trust in AI-generated neuroimaging insights.
- Integration of neuroimaging biomarkers into clinical workflows for diagnostic and therapeutic purposes.
Keywords: machine learning, deep learning, explainable artificial intelligence, neuroimaging, network analysis, brain organization
Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.