ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. AI for Human Learning and Behavior Change
Volume 8 - 2025 | doi: 10.3389/frai.2025.1611534
This article is part of the Research Topic: AI Innovations in Education: Adaptive Learning and Beyond
Artificial intelligence-enhanced assessment of fundamental motor skills: Validity and reliability of the FUS test for jumping rope performance
Provisionally accepted
- 1 Faculty of Physical Education and Health, Józef Piłsudski University of Physical Education in Warsaw, Warsaw, Poland
- 2 Department of Kinesiology, Recreation, and Sport Studies, The University of Tennessee, Knoxville, Knoxville, Tennessee, United States
- 3 Faculty of Physical Education, Józef Piłsudski University of Physical Education in Warsaw, Warsaw, Poland
- 4 Artificial Intelligence Department, DG Consulting, Wrocław, Poland
- 5 Department of Research in Artificial Intelligence, Instat sp. z o.o., Wrocław, Poland
- 6 Faculty of Rehabilitation, Józef Piłsudski University of Physical Education in Warsaw, Warsaw, Poland
Widespread concerns about children’s low fundamental motor skill (FMS) proficiency highlight the need for accurate assessment tools to support structured instruction. This study examined the validity and reliability of an AI-enhanced methodology for assessing jumping rope performance within the Fundamental Motor Skills in Sport (FUS) test. A total of 236 participants (126 primary school students aged 7-14 and 110 university sports students aged 20-21) completed jumping rope tasks recorded via the FUS mobile app integrated with an AI model evaluating five process-oriented performance criteria. Concurrent validity and inter-rater reliability were examined by comparing AI-generated assessments with scores from two expert evaluators. Intra-rater reliability was also assessed through reassessment of video trials after a three-week interval. Results revealed excellent concurrent validity and inter-rater reliability for the AI model compared with expert ratings (ICC = 0.96; weighted kappa = 0.87). Agreement on individual criteria was similarly high (Cohen’s kappa = 0.83-0.87). Expert-adjusted AI scores further improved reliability (ICC = 0.98). Intra-rater reliability was also excellent, with perfect agreement for AI-generated scores (ICC = 1.00; kappa = 1.00). These findings demonstrate that AI-based assessment offers objective, reliable, and scalable evaluation, enhancing the accuracy and efficiency of FMS assessment in education and research.
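For readers interested in how agreement statistics of the kind reported above can be computed, the sketch below shows one way to obtain an intraclass correlation coefficient and a weighted kappa from paired AI and expert ratings in Python, using the pingouin and scikit-learn libraries. The example data, column names, and quadratic weighting scheme are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch (not the authors' analysis code): agreement statistics
# between AI-generated and expert ratings. Data and column names are hypothetical.
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Hypothetical long-format data: one row per (trial, rater) pair, where
# "rater" is the AI model or an expert and "score" is the total
# jumping rope score assigned to that trial.
ratings = pd.DataFrame({
    "trial": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["ai", "expert", "ai", "expert", "ai", "expert", "ai", "expert"],
    "score": [8, 8, 6, 7, 9, 9, 5, 5],
})

# Intraclass correlation coefficient: pingouin returns all six ICC variants,
# so the row matching the intended two-way model is selected afterwards.
icc = pg.intraclass_corr(
    data=ratings, targets="trial", raters="rater", ratings="score"
)
print(icc[["Type", "ICC", "CI95%"]])

# Weighted kappa between AI and expert scores (quadratic weighting shown;
# the article does not specify the weighting scheme used).
ai_scores = ratings.loc[ratings["rater"] == "ai", "score"].to_numpy()
expert_scores = ratings.loc[ratings["rater"] == "expert", "score"].to_numpy()
kappa_w = cohen_kappa_score(ai_scores, expert_scores, weights="quadratic")
print(f"Weighted kappa: {kappa_w:.2f}")
```

In practice, such an analysis would use one row per video trial and rater, with per-criterion scores analyzed separately (Cohen's kappa per criterion) and total scores used for the ICC and weighted kappa.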
Keywords: motor competence, fundamental movement skills, machine learning, mobile application, physical education
Received: 14 Apr 2025; Accepted: 21 Jul 2025.
Copyright: © 2025 Makaruk, Porter, Webster, Makaruk, Tomaszewski, Nogal, Gawłowski, Sobański, Molik and Sadowski. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Hubert Makaruk, Faculty of Physical Education and Health, Józef Piłsudski University of Physical Education in Warsaw, Warsaw, Poland
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.