AUTHOR=Kurban Zhandos, Khassenov Didar, Burkitbaev Zhandos, Bulekbayeva Sholpan, Chinaliyev Azat, Bakhtiyar Serik, Saparbayev Samat, Sultanaliyev Tokan, Zhunissova Ulzhalgas, Slivkina Natalia, Titskaya Elena, Arias Luis, Aldakuatova Dana, Yessenbayeva Gulfairus, Ermakhan Zhanerke
TITLE=Artificial intelligence–enhanced mapping of the international classification of functioning, disability and health via a mobile app: a randomized controlled trial
JOURNAL=Frontiers in Public Health
VOLUME=13
YEAR=2025
URL=https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2025.1590401
DOI=10.3389/fpubh.2025.1590401
ISSN=2296-2565
ABSTRACT=
Background: Mobile health applications and artificial intelligence (AI) are increasingly used to streamline clinical workflows and support functional assessment. The International Classification of Functioning, Disability and Health (ICF) provides a standardized framework for evaluating patient functioning, yet AI-driven ICF mapping tools remain underexplored in routine clinical settings.
Objective: This study aimed to evaluate the efficiency and accuracy of the MedQuest mobile application, which features integrated AI-based ICF mapping, compared with traditional paper-based assessment in hospitalized patients.
Methods: A parallel-group randomized controlled trial was conducted in two medical centers in Astana, Kazakhstan. A total of 185 adult inpatients (aged ≥18 years) were randomized to either a control group using paper questionnaires or an experimental group using the MedQuest app. Both groups completed identical standardized assessments (SF-12, IPAQ, VAS, Barthel Index, MRC scale). The co-primary outcomes were (1) total questionnaire completion time and (2) agreement between AI-generated and clinician-generated ICF mappings, assessed using quadratic weighted kappa.
Secondary outcomes included AI sensitivity and specificity, confusion matrix analysis, and physician usability ratings via the System Usability Scale (SUS).
Results: The experimental group completed questionnaires significantly faster than the control group (median 18 vs. 28 min, p < 0.001). Agreement between AI- and clinician-generated ICF mappings was substantial (κ = 0.842), with 80.6% of qualifiers matching exactly. The AI demonstrated high sensitivity and specificity for common functional domains (e.g., codes 1–2), though performance decreased for rare qualifiers. Micro-averaged sensitivity and specificity were 0.806 and 0.952, respectively. The mean SUS score among physicians was 86.8, indicating excellent usability and acceptability.
Conclusion: The MedQuest mobile application significantly improved workflow efficiency and demonstrated strong concordance between AI- and clinician-assigned ICF mappings. These findings support the feasibility of integrating AI-assisted tools into routine clinical documentation. A hybrid model combining AI automation with clinician oversight may enhance accuracy and reduce documentation burden in time-constrained healthcare environments.
Trial registration: ClinicalTrials.gov, identifier NCT07021781.
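The agreement metrics reported in the abstract (quadratic weighted kappa; micro-averaged sensitivity and specificity derived from a multi-class confusion matrix) can be sketched in a few lines. This is an illustrative example with made-up qualifier labels, not the trial's data or the authors' analysis code; it assumes ICF qualifiers coded on an ordinal 0–4 scale and uses scikit-learn for the kappa and confusion matrix.

```python
# Illustrative computation of the agreement metrics named in the abstract.
# The label arrays below are hypothetical, not data from the trial.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical ICF qualifiers (ordinal 0-4) assigned by clinician vs. AI
clinician = [0, 1, 2, 2, 3, 1, 0, 4, 2, 1]
ai        = [0, 1, 2, 3, 3, 1, 0, 4, 2, 2]

# Quadratic weighted kappa: disagreements are penalized by the squared
# distance between qualifiers, so a 2-vs-3 mismatch costs less than 0-vs-4.
kappa = cohen_kappa_score(clinician, ai, weights="quadratic")

# Micro-averaging: pool true/false positives and negatives over all classes
# of the multi-class confusion matrix, then compute one overall rate.
cm = confusion_matrix(clinician, ai, labels=[0, 1, 2, 3, 4])
tp = np.diag(cm)                 # per-class true positives
fn = cm.sum(axis=1) - tp         # per-class false negatives
fp = cm.sum(axis=0) - tp         # per-class false positives
tn = cm.sum() - (tp + fn + fp)   # per-class true negatives

micro_sens = tp.sum() / (tp.sum() + fn.sum())
micro_spec = tn.sum() / (tn.sum() + fp.sum())
```

With exact matches on 8 of 10 items, micro-averaged sensitivity equals the exact-match rate (0.80), mirroring how the paper's 0.806 micro-sensitivity tracks its 80.6% exact-agreement figure.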