AUTHOR=Obinata Taichi, Baba Kazutomo, Uehara Akira, Kawamoto Hiroaki, Sankai Yoshiyuki
TITLE=Translating human information into robot tasks: action sequence recognition and robot control based on human motions
JOURNAL=Frontiers in Robotics and AI
VOLUME=12
YEAR=2025
URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2025.1462833
DOI=10.3389/frobt.2025.1462833
ISSN=2296-9144
ABSTRACT=Long-lasting, highly reliable batteries are essential for wearable cyborgs, including the Hybrid Assistive Limb, and for wearable vital-sensing devices. Consequently, research and development efforts are ongoing to create safer next-generation batteries. Researchers, leveraging advanced specialized knowledge and skills, bring such products to completion through trial-and-error processes that involve modifying materials, shapes, work protocols, and procedures. If robots could undertake the tedious, repetitive, and attention-demanding tasks that researchers currently perform in facility environments, researcher workload would be reduced and reproducibility ensured. In this study, aiming to reduce researcher workload and ensure reproducibility in trial-and-error tasks, we proposed and developed a system that collects human motion data, recognizes action sequences, and transfers both physical information (including skeletal coordinates) and task information to a robot. This enables the robot to perform sequential tasks traditionally performed by humans. The proposed system employs a non-contact method to acquire three-dimensional skeletal information over time, allowing quantitative analysis without interfering with the sequential tasks. In addition, we developed an action sequence recognition model based on skeletal information and object detection results that operates independently of background information; this model can adapt to changes in work processes and environments. By translating this human information, that is, the physical and semantic information of a sequential task performed by a human, to a robot, the robot can perform the same task. An experiment was conducted to verify this capability using the proposed system. The proposed action sequence recognition method demonstrated high accuracy in recognizing human-performed tasks, with an average Edit score of 95.39 and an average F1@10 score of 0.951. In two of the four trials, the robot adapted to changes in the work process without misrecognizing the action sequence and seamlessly executed the sequential task performed by the human. In conclusion, we confirmed the feasibility of the proposed system.
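
The abstract reports an average Edit score of 95.39 and an average F1@10 of 0.951. For readers unfamiliar with these temporal action segmentation metrics, below is a minimal Python sketch of their standard definitions (segmental edit score, and segmental F1 with a 10% IoU threshold), as commonly defined in the action segmentation literature; this is illustrative only and not the authors' implementation, and the example action labels ("pick", "pour", "place") are hypothetical.

```python
def segments(labels):
    """Collapse a per-frame label sequence into (label, start, end) segments
    with exclusive end indices."""
    segs, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segs.append((labels[start], start, i))
            start = i
    return segs

def edit_score(pred, gt):
    """Segmental edit score in [0, 100]: 100 * (1 - normalized Levenshtein
    distance between the two sequences of segment labels)."""
    p = [s[0] for s in segments(pred)]
    g = [s[0] for s in segments(gt)]
    m, n = len(p), len(g)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == g[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * (1.0 - d[m][n] / max(m, n, 1))

def f1_at_k(pred, gt, k=0.10):
    """Segmental F1@k: a predicted segment is a true positive if its IoU with
    a not-yet-matched ground-truth segment of the same label exceeds k."""
    p_segs, g_segs = segments(pred), segments(gt)
    matched = [False] * len(g_segs)
    tp = 0
    for label, ps, pe in p_segs:
        best_iou, best_j = 0.0, -1
        for j, (glabel, gs, ge) in enumerate(g_segs):
            if glabel != label or matched[j]:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))
            union = max(pe, ge) - min(ps, gs)
            iou = inter / union
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_iou > k:
            tp += 1
            matched[best_j] = True
    fp = len(p_segs) - tp
    fn = len(g_segs) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical per-frame predictions vs. ground truth for a 3-step task.
gt   = ["pick"] * 10 + ["pour"] * 15 + ["place"] * 5
pred = ["pick"] * 12 + ["pour"] * 13 + ["place"] * 5
print(edit_score(pred, gt))    # 100.0: segment order matches exactly
print(f1_at_k(pred, gt, 0.10)) # 1.0: every segment overlaps its match by > 10% IoU
```

Note that the edit score rewards getting the order of actions right regardless of exact frame boundaries, while F1@10 additionally requires each predicted segment to overlap a same-label ground-truth segment, which is why the two are typically reported together.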