ORIGINAL RESEARCH article

Front. Robot. AI

Sec. Biomedical Robotics

Volume 12 - 2025 | doi: 10.3389/frobt.2025.1462833

This article is part of the Research Topic "Medical Cybernics".

Translating Human Information into Robot Tasks: Action Sequence Recognition and Robot Control based on Human Motions

Provisionally accepted
  • 1Graduate School of Science and Technology, University of Tsukuba, Tsukuba, Ibaraki, Japan
  • 2Cyberdyne Inc., Tsukuba, Ibaraki, Japan
  • 3Center for Cybernics Research, University of Tsukuba, Tsukuba, Ibaraki, Japan
  • 4Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba, Ibaraki, Japan

The final, formatted version of the article will be published soon.

Long-term usability and highly reliable batteries are essential for wearable cyborgs such as the Hybrid Assistive Limb and for wearable vital-sensing devices. Consequently, research and development efforts are underway to create safer next-generation batteries. Researchers, leveraging advanced specialized knowledge and skills, bring such products to completion through trial-and-error processes that involve modifying materials, shapes, work protocols, and procedures. If robots could undertake the tedious, repetitive, and attention-demanding tasks currently performed by researchers in facility environments, researcher workload would be reduced and reproducibility ensured. In this study, aiming to reduce researcher workload and ensure reproducibility in trial-and-error tasks, we proposed and developed a system that collects human motion data, recognizes action sequences, and transfers both physical information (including skeletal coordinates) and task information to a robot, enabling the robot to perform sequential tasks traditionally carried out by humans. The proposed system employs a non-contact method to acquire three-dimensional skeletal information over time, allowing quantitative analysis without interfering with the sequential task. In addition, we developed an action sequence recognition model based on skeletal information and object detection results that operates independently of background information and can therefore adapt to changes in work processes and environments. By translating the human information of a sequential task, both physical and semantic, into a form the robot can use, the robot can perform the same task. An experiment was conducted to verify this capability using the proposed system. The proposed action sequence recognition method recognized human-performed tasks with high accuracy, achieving an average Edit score of 95.39 and an average F1@10 score of 0.951. In two of the four trials, the robot adapted to changes in the work process without misrecognizing action sequences and seamlessly executed the sequential task performed by the human. In conclusion, we confirmed the feasibility of the proposed system.
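For readers unfamiliar with the metrics quoted above: the segmental Edit score and F1@k are standard measures in temporal action segmentation (following the widely used definitions of Lea et al., 2017). The Python sketch below is a minimal, illustrative implementation of those standard definitions, not the authors' evaluation code; the label sequences in the usage example at the end are hypothetical.

```python
def segments(frames):
    """Collapse a frame-wise label sequence into (label, start, end) segments;
    end is exclusive."""
    segs, start = [], 0
    for i in range(1, len(frames) + 1):
        if i == len(frames) or frames[i] != frames[start]:
            segs.append((frames[start], start, i))
            start = i
    return segs


def edit_score(pred, gt):
    """Segmental Edit score: 100 * (1 - normalized Levenshtein distance)
    between the predicted and ground-truth segment label sequences."""
    p = [s[0] for s in segments(pred)]
    g = [s[0] for s in segments(gt)]
    m, n = len(p), len(g)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return 100.0 * (1.0 - d[m][n] / max(m, n, 1))


def f1_at_k(pred, gt, k=0.10):
    """Segmental F1@k: a predicted segment is a true positive if its temporal
    IoU with a not-yet-matched ground-truth segment of the same label is >= k."""
    p_segs, g_segs = segments(pred), segments(gt)
    hit = [False] * len(g_segs)
    tp = 0
    for label, ps, pe in p_segs:
        best_iou, best_j = 0.0, -1
        for j, (gl, gs, ge) in enumerate(g_segs):
            if gl != label or hit[j]:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))
            iou = inter / (max(pe, ge) - min(ps, gs))
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j >= 0 and best_iou >= k:
            tp += 1
            hit[best_j] = True
    fp = len(p_segs) - tp
    fn = len(g_segs) - tp
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Hypothetical frame-wise predictions for a three-step sequential task.
pred = ["pick"] * 40 + ["pour"] * 55 + ["place"] * 30
gt   = ["pick"] * 42 + ["pour"] * 50 + ["place"] * 33
print(edit_score(pred, gt))      # 100.0: segment order matches exactly
print(f1_at_k(pred, gt, 0.10))   # 1.0: every segment overlaps well above 10% IoU
```

Note that the Edit score rewards recovering the correct order of actions regardless of their exact timing, while F1@k additionally requires each predicted segment to overlap its ground-truth counterpart; reporting both, as the abstract does, separates sequencing errors from boundary-localization errors.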

Keywords: Cybernics, Action sequence recognition, Long-horizon task execution, 3D human skeletal information utilization, Human-robot interaction

Received: 10 Jul 2024; Accepted: 29 May 2025.

Copyright: © 2025 Obinata, Baba, Uehara, Kawamoto and Sankai. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Hiroaki Kawamoto, Institute of Systems and Information Engineering, University of Tsukuba, Tsukuba, Ibaraki, Japan

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.