
ORIGINAL RESEARCH article

Front. Robot. AI

Sec. Robotic Control Systems

Volume 12 - 2025 | doi: 10.3389/frobt.2025.1528754

This article is part of the Research Topic: Supervised Autonomy: How to Shape Human-Robot Interaction from the Body to the Brain

A Multi-User Multi-Robot Multi-Goal Multi-Device Human-Robot Interaction Manipulation Benchmark

Provisionally accepted
  • ARAYA Inc., Tokyo, Japan

The final, formatted version of the article will be published soon.

One weakness of human-robot interaction (HRI) research is the lack of reproducible results, caused by the absence of standardised benchmarks. In this work we introduce a multi-user multi-robot multi-goal multi-device manipulation benchmark (M4Bench), a flexible HRI platform in which multiple users can direct one or more simulated robots to perform a multi-goal pick-and-place task. Our software exposes a web-based visual interface, with support for mouse, keyboard, gamepad, eye tracker and electromyograph/electroencephalograph (EMG/EEG) user inputs. It can be further extended using native browser libraries or WebSocket interfaces, allowing researchers to add support for their own devices. We also provide tracking for several HRI metrics, such as task completion and command selection time, enabling quantitative comparisons between different user interfaces and devices. We demonstrate the utility of our benchmark with a user study (n = 50) conducted to compare 5 different input devices, and also compare single- vs. multi-user control. In the pick-and-place task, we found that users performed worse when using the eye tracker + EMG device pair, as compared to mouse + keyboard or gamepad + gamepad, over 4 quantitative metrics (corrected p < 0.001). Our software is available at https://github.com/arayabrain/m4bench.
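The abstract notes that M4Bench can be extended over a WebSocket interface so researchers can plug in their own input devices. As a purely illustrative sketch (the field names and message schema below are assumptions for illustration, not M4Bench's actual protocol; consult the repository for the real interface), a custom device driver would typically serialise each user command as a small JSON message before sending it over the socket:

```python
import json

def make_command(user_id: str, device: str, action: str, target: str) -> str:
    """Serialise a hypothetical device command as a JSON string,
    ready to be sent over a WebSocket connection to the benchmark.
    All field names here are illustrative assumptions."""
    return json.dumps({
        "user": user_id,      # which user issued the command
        "device": device,     # input device identifier, e.g. a custom EMG rig
        "action": action,     # e.g. "pick" or "place"
        "target": target,     # e.g. an object or goal identifier
    })

# Example: a hypothetical custom EMG device commanding a pick action
msg = make_command("user-1", "custom-emg", "pick", "block-red")
```

A driver built this way only needs a WebSocket client (e.g. the browser's native WebSocket API, since the platform is web-based) to forward such messages as the device produces them.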

Keywords: shared autonomy, human-robot interaction, multi-agent, multimodal, benchmark

Received: 15 Nov 2024; Accepted: 01 Sep 2025.

Copyright: © 2025 Yoshida, Dossa, Di Vincenzo, Sujit, Douglas and Arulkumaran. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Kai Arulkumaran, ARAYA Inc., Tokyo, Japan

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.