AUTHOR=Shabber Shaik Mulla, Sumesh Eratt Parameswaran
TITLE=AFM signal model for dysarthric speech classification using speech biomarkers
JOURNAL=Frontiers in Human Neuroscience
VOLUME=18
YEAR=2024
URL=https://www.frontiersin.org/journals/human-neuroscience/articles/10.3389/fnhum.2024.1346297
DOI=10.3389/fnhum.2024.1346297
ISSN=1662-5161
ABSTRACT=Neurological speech disorders significantly impair an individual's ability to communicate effectively through speech. The primary objective of this paper is to establish a classification system that distinguishes the speech of healthy individuals from that of individuals with dysarthria, a neurological speech disorder characterized by muscle weakness that results in slow, slurred, and less intelligible speech. Classification of dysarthric speech serves as a diagnostic tool, enabling accurate differentiation between healthy speech patterns and those affected by dysarthria; this distinction is achieved through the application of machine learning techniques. In this work, feature extraction was performed using the amplitude and frequency modulated (AFM) signal model, yielding a comprehensive set of distinctive features. Fourier-Bessel series expansion is employed to decompose the complex speech signal into its constituent components, and the Discrete Energy Separation Algorithm is then applied to extract two essential parameters, the amplitude envelope and the instantaneous frequency, from each component. To ensure the robustness and applicability of our findings, we used data from several sources, including the TORGO, UA-Speech, and Parkinson's datasets. The performance of several classifiers (KNN, SVM, LDA, NB, and Boosted Tree) was evaluated using multiple measures: area under the curve, F1-score, sensitivity, and accuracy. Our analyses yielded classification accuracies ranging from 85% to 97.8% and F1-scores between 0.90 and 0.97.
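
The abstract only sketches the feature-extraction pipeline, so the following is a minimal Python illustration under stated assumptions, not the authors' implementation: it assumes the standard zero-order Fourier-Bessel series expansion for component separation and the DESA-1 variant of the Discrete Energy Separation Algorithm (built on the Teager-Kaiser energy operator) for the amplitude envelope and instantaneous frequency. All function names and parameters here (fb_decompose, desa1, num_coeffs, band) are hypothetical.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

def fb_decompose(x, num_coeffs, band):
    """Zero-order Fourier-Bessel series expansion of a 1-D signal, then
    reconstruction from one contiguous band of coefficients. Grouping FB
    coefficients this way is a common route to isolating a mono-component
    from a multi-component speech signal."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    lam = jn_zeros(0, num_coeffs)  # first num_coeffs positive roots of J0
    # FB coefficients: C_m = 2 * sum(n * x[n] * J0(lam_m*n/N)) / (N^2 * J1(lam_m)^2)
    C = np.array([2.0 * np.sum(n * x * j0(l * n / N)) / (N**2 * j1(l)**2)
                  for l in lam])
    # The m-th coefficient maps roughly to frequency lam_m*fs/(2*pi*N), so a
    # contiguous index band acts like a band-pass reconstruction.
    lo, hi = band
    return sum(C[m] * j0(lam[m] * n / N) for m in range(lo, hi))

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1]*x[n+1] (endpoints left at zero)."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def desa1(x, fs, eps=1e-12):
    """DESA-1: estimate the amplitude envelope and instantaneous frequency
    (in Hz) of a mono-component AM-FM signal sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    psi_x = teager_energy(x)
    y = np.diff(x, prepend=x[0])          # backward difference y[n] = x[n]-x[n-1]
    psi_y = teager_energy(y)
    # cos(Omega[n]) = 1 - (psi_y[n] + psi_y[n+1]) / (4 * psi_x[n])
    cos_w = 1.0 - (psi_y + np.roll(psi_y, -1)) / (4.0 * psi_x + eps)
    omega = np.arccos(np.clip(cos_w, -1.0, 1.0))   # rad/sample
    amp = np.sqrt(np.abs(psi_x) / (np.sin(omega) ** 2 + eps))
    freq = omega * fs / (2.0 * np.pi)
    return amp, freq

# Toy usage on a synthetic AM-FM tone (estimates at the frame edges are unreliable):
fs = 16000
t = np.arange(int(0.02 * fs)) / fs  # one 20 ms frame
x = (1 + 0.5 * np.sin(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 440 * t)
component = fb_decompose(x, num_coeffs=len(x), band=(0, len(x)))
amp_env, inst_freq = fb_result = desa1(component, fs)
```

The amplitude envelope and instantaneous frequency tracks produced this way would then be summarized into scalar features for the KNN, SVM, LDA, NB, and Boosted Tree classifiers; the exact feature summary statistics are not specified in the abstract.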