<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Robotics and AI | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/robotics-and-ai</link>
        <description>RSS Feed for Frontiers in Robotics and AI | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>2026-05-14T21:18:06.76+00:00</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1816301</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1816301</link>
        <title><![CDATA[Morphological symmetry-aware generalized policy network for deep reinforcement learning]]></title>
        <pubDate>2026-05-13T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Ryo Hakoda</author><author>Yubin Liu</author><author>Matthew Hwang</author><author>Yoshihiro Sato</author><author>Jun Takamatsu</author><author>Katsushi Ikeuchi</author><author>Takeshi Oishi</author>
        <description><![CDATA[Exploiting the morphological symmetry of robotic systems, such as humanoid and quadruped robots, is a promising direction for improving robot learning. In deep reinforcement learning (DRL) for robot control, prior studies have leveraged such symmetry to improve learning efficiency through data augmentation, equivariant multilayer perceptrons (EMLPs), and multi-agent reinforcement learning (MARL) formulations. However, DRL training is inherently unstable, as the data distribution strongly depends on exploration, which is driven by stochasticity in the environment. To address this issue, we propose a symmetry-assisted, general-purpose DRL framework for morphologically symmetric robots that enables stable and robust learning. The framework models the environment as a symmetric Markov decision process (MDP) and constructs a full-body policy from a single-sided base policy using symmetry operators. We further propose a symmetric PPO objective with a coupled importance-sampling ratio. This objective aligns the policy optimization process with the imposed symmetry and serves as a principled alternative to MAPPO-style multi-agent formulations. Experimental results demonstrate that the proposed method outperforms existing approaches on most symmetric tasks, while still maintaining performance comparable to or better than standard PPO on asymmetric tasks, where symmetry is less directly exploitable.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1807613</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1807613</link>
        <title><![CDATA[Dual-arm admittance control using conformal geometric algebra]]></title>
        <pubDate>2026-05-13T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Tobias Löw</author><author>Mariana de Paula Assis Fonseca</author><author>Vitalii Pruks</author><author>Graham Deacon</author><author>Jelizaveta Konstantinova</author><author>Sylvain Calinon</author>
        <description><![CDATA[We propose a task-space admittance controller for dual-arm robotic systems using conformal geometric algebra. The controller is a reinterpretation of a previous work using dual quaternion algebra. By introducing conformal geometric algebra, we aim to enhance the geometric expressiveness, which simplifies the modeling of various tasks and opens doors to more complex applications, such as the modeling of multiple points of contact on the robotic arm in a whole-body manipulation task. We first show the derivation of the controller for a single-arm robot, which is then extended to a dual-arm robot. The closed-loop system is therefore composed of an outer loop admittance controller that imposes the apparent impedance, and an inner loop that transforms the twist acceleration to a control input that is sent to the robot. Experiments executed on a setup with two LBR KUKA iiwa 14 R820 robots with a force/torque sensor in each end-effector show good performance of the proposed controller for both single and dual-arm tasks. Namely, the system was able to reach the desired poses in the absence of external wrenches, while moving in a compliant manner in the presence of external wrenches, adapting the robot’s motion to keep the desired impedance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1861947</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1861947</link>
        <title><![CDATA[Editorial: Reinforcement learning for real-world robot navigation]]></title>
        <pubDate>2026-05-12T00:00:00Z</pubDate>
        <category>Editorial</category>
        <author>Pengqin Wang</author><author>Xiaocong Li</author><author>Meixin Zhu</author><author>Jun Ma</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1792384</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1792384</link>
        <title><![CDATA[Multi-strategy Sea Horse Optimization algorithm for UAV path planning]]></title>
        <pubDate>2026-05-11T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Amir Seyyedabbasi</author><author>Bahman Arasteh</author><author>Ahmet Gurhanli</author><author>Jawad Rasheed</author>
        <description><![CDATA[Unmanned aerial vehicle (UAV) path planning is a challenging constrained optimization problem and a key component of autonomous navigation. Traditional optimization techniques frequently encounter difficulties in handling the complex constraints of UAV path planning, and even metaheuristic algorithms may suffer from premature convergence to local optima. A modified variant of the Sea Horse Optimization algorithm (SHO), denoted as moSHO, is introduced for threat-aware UAV path planning. The proposed algorithm extends the original SHO’s movement, predation, and reproduction mechanisms through three cooperative strategies. First, a fish-aggregating device (FAD) mechanism promotes behavioral diversity through adaptive, range-aware perturbations. Second, a best–worst position mutation (BWPM) operator applies fine-grained Gaussian adjustments to the best-performing individuals while simultaneously guiding the worst individuals toward the current best using a differential update with Cauchy perturbation. Third, quasi–reflection-based learning (QRBL) introduces quasi-opposite candidates to strengthen exploration and population diversity. The integration of these strategies strengthens the exploration capability without reducing exploitation, resulting in a more balanced optimization process. An evaluation of 23 benchmark functions demonstrates the robustness of moSHO. Moreover, experiments on the UAV path planning model under threat environments prove its reliability in identifying safe, feasible paths.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1739259</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1739259</link>
        <title><![CDATA[Bridging ancient wisdom and cognitive engagement: a comparative study of chatbot-based moral instruction]]></title>
        <pubDate>2026-05-11T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Sakshi Chauhan</author><author>Varun Dutt</author>
        <description><![CDATA[Convincing learners to engage deeply with complex moral and philosophical concepts remains a major challenge in contemporary learning environments, particularly within increasingly digital educational settings. Although conversational AI offers new possibilities for interactive learning, its potential for supporting ethics education remains underexplored. This study examines the effectiveness of a chatbot-based learning condition compared with a reading condition and a no-intervention control group. Learners’ outcomes were assessed through cognitive tests, self-reported emotional engagement, heart rate variability, and electroencephalographic (EEG) activity. Results showed that both the chatbot and reading conditions improved moral understanding relative to the control group. Emotional engagement was assessed during the chatbot interaction and indicated strong affective involvement among participants. EEG measures suggested increased neural engagement during the instructional conditions, while the reading condition demonstrated higher indices of attentional focus. Both intervention conditions also showed greater physiological engagement than the control group. These findings suggest that conversational AI can serve as a promising interactive tool for supporting moral learning and for facilitating deeper engagement with abstract ethical concepts in contemporary educational contexts.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1857216</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1857216</link>
        <title><![CDATA[Editorial: The impact of robotic technologies on customer experience and adoption]]></title>
        <pubDate>2026-05-11T00:00:00Z</pubDate>
        <category>Editorial</category>
        <author>Anshu Saxena Arora</author><author>Amit Arora</author><author>John McIntyre</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1785039</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1785039</link>
        <title><![CDATA[Speech-touch integration for affective human–robot interaction: a scoping review]]></title>
        <pubDate>2026-05-08T00:00:00Z</pubDate>
        <category>Systematic Review</category>
        <author>Alastair Howcroft</author><author>Maria Elena Giannaccini</author><author>Steve Benford</author><author>Ahmad Khan</author><author>Holly Blake</author>
        <description><![CDATA[Background: Artificial intelligence is increasingly capable of expressing empathy through language, yet the integration of physical touch–an important cue for social connection–remains fragmented. Although robots utilise language or touch individually, few systems coordinate both modalities, potentially limiting their capacity for affective human-robot interaction (HRI). This scoping review maps social robots that combine spoken language and tactile interaction (e.g., hugging, stroking, warmth, vibration), examines how these modalities are coordinated in existing systems, and synthesises reported user outcomes and design implications. Methods: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, searches across five databases (IEEE Xplore, PubMed, ACM, Web of Science, Scopus) and supplementary web sources identified 11 distinct HRI implementations that pair speech with active or invited touch. Of these, eight implementations included explicit comparison conditions (e.g., speech-only vs. speech + touch, or touch-only vs. touch + speech), enabling assessment of the added value of combining modalities. Results: Across comparative studies, combining speech and touch showed potential to be more effective than speech-only or touch-only HRI in some contexts. This integration can make robots appear more caring, empathic, and human-like, while strengthening attachment, increasing willingness to self-disclose, and helping users feel calmer (e.g., lower heart rate). However, outcomes were implementation-dependent, with some studies reporting no additional benefit from the combined modalities. Across the evidence base, the review found a consistent suggestive pattern that warm (e.g., near skin temperature), soft, naturalistic touch tends to support more positive affective HRI outcomes than cold, rigid, “mechanical” touch. The evidence base was also largely dominated by short, lab-based studies using existing, typically rigid robotic platforms not purpose-built for affective speech–touch interaction. Conclusion: Speech–touch integration in social HRI is a small but promising area, particularly for healthcare and emotional-support applications (e.g., supporting children in hospital). Despite this potential, very few robots are purpose-built for coordinated speech and touch. Affective speech–touch HRI remains challenging because of its psychological, socio-cultural, and engineering demands. Progress will likely require soft, safe, warm, and increasingly autonomous systems that move beyond repurposed rigid platforms. Systematic Review Registration: https://doi.org/10.17605/OSF.IO/2PA6J, identifier OSF.IO/2PA6J.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1733942</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1733942</link>
        <title><![CDATA[From testbeds to high-stakes work: a review of Human-AI teaming domains and teaming factors]]></title>
        <pubDate>2026-05-07T00:00:00Z</pubDate>
        <category>Systematic Review</category>
        <author>Shaida Kargarnovin</author><author>Christopher Ivan Hernandez</author><author>Dirk Reiners</author><author>Carolina Cruz-Neira</author><author>Grace Bochenek</author><author>Waldemar Karwowski</author>
        <description><![CDATA[Introduction: Human-AI teaming is increasingly being studied in applied and high-stakes settings, yet the evidence remains dispersed across domains, constructs, and research traditions. This fragmentation also limits efforts to connect broader human-AI findings to human-robot teaming (HRT), where embodied systems make issues such as coordination, autonomy management, communication, and safety more immediate in real-world interaction. Methods: To provide a clearer picture of the field, we conducted a PRISMA-guided systematic review with bibliometric analysis of 104 peer-reviewed empirical studies published between 2015 and 2025 and identified through Engineering Village, IEEE Xplore, PubMed, ScienceDirect, and Web of Science. Results: The review maps where human-AI teaming has been evaluated and what teaming aspects are most frequently examined. Cross-domain and interdisciplinary studies were the largest category, representing broad workplace or team-based investigations not tied to a single industry and instead focused on general collaboration issues such as communication, teamwork, coordination, and coworker interaction. Gaming and entertainment, aviation, military and defense operations, emergency response and public safety, and healthcare also represented substantial portions of the literature. Across studies, performance was the most frequently examined aspect, followed by trust, explainability and transparency, decision-making, and team processes. Bibliometric patterns suggest a shift since 2020 from foundational demonstrations in controlled settings toward applied, higher-stakes contexts where trust dynamics, communication, and ethical accountability more directly shape adoption and sustained performance. Discussion: Evidence points to a practical conclusion that human-AI teaming works best when the interaction supports coordination, allowing users to form accurate expectations of the AI, adjust autonomy and delegation across task phases, and use transparency cues that calibrate reliance without adding burden. For HRT, these findings reinforce the importance of shared control, mixed-initiative interaction, and designs that help humans and robots coordinate action over time rather than simply divide functions. We conclude by outlining implications for designing and evaluating human-AI teams as socio-technical systems and for prioritizing longitudinal and in-context studies that capture how teaming evolves over time.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1825254</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1825254</link>
        <title><![CDATA[A methodological framework and experimental protocol for proactive human–robot collaboration with multimodal intention prediction and adaptive control]]></title>
        <pubDate>2026-05-07T00:00:00Z</pubDate>
        <category>Hypothesis and Theory</category>
        <author>Juan Escobar-Naranjo</author><author>Carlos García-Ávila</author><author>Ivón O. Benítez-González</author><author>Paúl Baldeón-Egas</author><author>Wilmer Albarracín-Guarochico</author>
        <description><![CDATA[Industry 5.0 requires collaborative robots that can anticipate operator needs to improve fluency and safety in assembly. However, many human–robot collaboration (HRC) systems still treat perception, intention inference, and control as separate components. This study presents a theoretical perception–cognition–action framework that explicitly couples multimodal intention prediction with proactive and adaptive control. Multimodal observations such as RGB-D vision, gaze, wrist force/torque, robot joint state, and previous robot action are encoded by a hybrid Convolutional Neural Network (CNN)–Long Short-Term Memory (LSTM)–Transformer to estimate (i) a probability distribution over future human intentions and (ii) a short-horizon motion trajectory, trained with a composite loss that jointly optimizes classification and regression with kinematic coherence. The predicted intention probability is embedded into an augmented Markov Decision Process state, enabling a Soft Actor–Critic agent to learn continuous policies with rewards designed for synergy, efficiency, safety, and fluency. The main contributions of this study are the formal probabilistic linkage from intention prediction to adaptive control, the definition of a multi-output cognitive objective, and the design of an implementation-ready experimental protocol for future empirical validation. Overall, the proposed methodological framework and experimental protocol provide a reproducible basis for future empirical validation of proactive human–robot collaboration in industrial assembly tasks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1747157</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1747157</link>
        <title><![CDATA[Passive adaptive grippers: a mini-review]]></title>
        <pubDate>2026-05-07T00:00:00Z</pubDate>
        <category>Mini Review</category>
        <author>Ming Chun Chan</author><author>Rob B. N. Scharff</author>
        <description><![CDATA[Passive adaptive grippers leverage existing degrees of freedom (DOFs) of an external host system such as a robotic arm to complete a manipulation task. These grippers commonly rely on embodied intelligence to achieve this goal, leveraging interaction between the gripper and the environment to trigger prehension, retention, and release of an object. This mini-review establishes a framework for classification of state-of-the-art passive gripper designs across three phases of the gripping procedure: passive prehension (contact-loaded or preloaded), passive retention (externally or internally-sustained), and passive release (contact-based or contactless). Hereby, this work aims to accelerate future research on passive adaptive grippers and provide guidance for application-specific gripper design. Fully passive grippers that simultaneously combine reliable prehension, internally-sustained retention, and contactless release remain scarce. A fundamental trade-off exists between the gripper’s controllability and the host system’s flexibility; optimal gripper design must therefore be tailored to the specific task and operational constraints. Another key challenge is to minimize the force required to be exerted on the object to activate passive prehension. A promising direction towards addressing this challenge is the development of passive preloading mechanisms.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1769678</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1769678</link>
        <title><![CDATA[Enhanced Dynamic Window Approach for socially compliant robot navigation]]></title>
        <pubDate>2026-05-06T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>S. Ashwath</author><author>R. Mayank</author><author>S. Pavithra</author>
        <description><![CDATA[While contemporary deep learning methods are frequently computationally costly, traditional local planners like the Dynamic Window Approach (DWA) are essentially constrained by their purely geometric, “socially blind” nature. This research introduces Semantic-DWA, a unique, lightweight, and interpretable framework that closes this gap by adding a critical layer of semantic knowledge to the traditional DWA. Our methodology utilizes a perception function to categorize obstacles as “person,” “pet,” or “object” and implements a social disqualification rule that treats class-specific proxemic boundaries as hard constraints. Evaluated in a Python-based 2D simulator, comparative results demonstrated that while the standard DWA led to multiple collisions and proxemic violations, the Semantic-DWA completed all runs with zero collisions, maintaining distinct safe clearances such as 1.00 m for persons and 2.08 m for pets. This study indicates that meaningful social intelligence can be added to proven local planners through minimal extensions, offering a verifiable and predictable solution for safer human-robot coexistence.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1767798</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1767798</link>
        <title><![CDATA[Design, development, and validation of a multimodal synergy-based intuitive virtual and augmented reality therapy platform for mental health]]></title>
        <pubDate>2026-05-04T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Parthan Olikkal</author><author>Oritsejolomisan Mebaghanje</author><author>Viraj Janeja</author><author>Sruthi Sundharram</author><author>Golnaz Moharrer</author><author>Akshara Ajendla</author><author>Andrea Kleinsmith</author><author>Ann Sofie Clemmensen</author><author>Rajasekhar Anguluri</author><author>Adam Culbreth</author><author>Ramana Vinjamuri</author>
        <description><![CDATA[Embodied therapies such as movement therapy have shown promise in enhancing emotional regulation, cognitive engagement, and physical rehabilitation. However, scalable and personalized delivery of such interventions remains a critical challenge. This work presents SIVAM (Synergy-based Intuitive Virtual and Augmented Mental Health platform), a multimodal system that integrates immersive virtual environments, markerless motion capture, physiological sensing, and humanoid robotic mirroring to support affect-aware interventions for mental health. SIVAM combines RGB camera-based skeletal tracking with EEG, EMG, ECG, GSR, and skin temperature sensing using a wearable dry electrode headset to create a closed-loop therapeutic framework. Movement synergies–low-dimensional coordinated patterns across body joints and muscles–are extracted from motion data and aligned with physiological signals to infer affective and motor states in real time, serving as potential biomarkers of stress. The system further introduces a plane-wise movement model that enables natural 3D avatar navigation using a single RGB camera, enhancing embodiment and interaction with virtual environments. A pilot study (N = 5) with participants of varying dance experience demonstrated reliable motion tracking, real-time synchronization of physiological and movement data, and robust avatar and robot mirroring across diverse movements. These results highlight the feasibility of combining multimodal sensing, virtual avatars, and socially assistive robots to enable scalable, home-based movement therapy.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1772005</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1772005</link>
        <title><![CDATA[Integrating foundation models with change detection to identify tasking for service robots]]></title>
        <pubDate>2026-04-29T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Eric Martinson</author><author>Igri Fishta</author><author>Devson Butani</author>
        <description><![CDATA[Mobile manipulators show incredible promise as domestic service robots – interacting with a wide variety of objects using increasingly affordable hardware. Yet although perception, manipulation, and mobility have advanced, fundamental challenges remain in making robots more useful. How can a robot proactively identify tasks that it can complete while supporting individual human preferences for how a home should be configured? We propose using foundation models to first detect what has changed and then select appropriate tasks for the service robot. Change affords action. Only those objects that have been interacted with need to be considered for tasking. Other objects, even if located in non-standard positions in the house, can be ignored. Open-vocabulary object detection and neural radiance field models are used to identify changes corresponding to fixed phrases. Large language models then validate which tasks should be completed by the robot. Experiments are conducted on data collected by both a mobile phone and a Stretch 2 Mobile Manipulator, demonstrating general applicability to a wide range of applications in the home.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1850311</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1850311</link>
        <title><![CDATA[Editorial: AI for design and control of advanced robots]]></title>
        <pubDate>2026-04-28T00:00:00Z</pubDate>
        <category>Editorial</category>
        <author>Guimin Chen</author><author>Xiaohu Li</author><author>Ke Wu</author><author>Peng Xia</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1772079</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1772079</link>
        <title><![CDATA[Exploring robot-led activities between people living with dementia and family care partners]]></title>
        <pubDate>2026-04-28T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Jirachaya Fern Limprayoon</author><author>Debasmita Ghose</author><author>Kayla Matheus</author><author>Paula V. Enriquez</author><author>Michal A. Lewkowicz</author><author>Moon Hwan Kim</author><author>Austin Narcomey</author><author>Natnaree Proud Ua-Arak</author><author>Andy Cheng</author><author>Chayan Sarkar</author><author>Joan K. Monin</author><author>Brian Scassellati</author>
        <description><![CDATA[Introduction: While shared activities foster connection between people living with dementia (PLWD) and their care partners, emotional distress and daily caregiving responsibilities often make them difficult to initiate. This paper investigates the adaptation of a socially assistive robot, Ommie, to guide shared deep breathing and singing activities for these pairs. Methods: We refined the robot’s behaviors through two interaction design sessions with people living with dementia and care partners, mediated by an occupational therapist. In a subsequent study with 17 pairs, participants engaged in deep breathing and singing activities with the robot as well as in-session semi-structured interviews, and we conducted post-hoc video analysis to explore their interactional dynamics. Results: Participants reported the interactions as easy to follow, calming, and familiar. Post-hoc video analysis revealed patterns of intimacy and synchrony, including frequent physical touch, mutual gaze, and rhythmic coordination. We also observed instances of personal memory recall and a playful atmosphere, in which pairs often used humor as a coping mechanism after deviations from the robot’s instructions. Discussion: From our observations, we discuss three design opportunity spaces: the robot as the focus for synchronization, as an instrument of joint play, and as a source of familiarity versus variety.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1801347</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1801347</link>
        <title><![CDATA[Development of a tendon-driven serial manipulator for an aquatic autonomous surface vehicle]]></title>
        <pubDate>2026-04-24T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Alaa Khalifa</author><author>Keir Groves</author><author>Joaquin Carrasco</author>
        <description><![CDATA[This paper proposes a two-degree-of-freedom (2-DOF) tendon-driven manipulator to be attached to an aquatic autonomous surface vehicle (ASV), such as the MallARD platform. The attachment expands the ASV’s reachable workspace and enables the ASV to perform underwater tasks in addition to those performed on the water’s surface. The MallARD’s own DOFs are exploited to reduce the number of DOFs required of the proposed manipulator. The manipulator’s actuators are installed in the base, above the water line, and wires transmit power to the manipulator’s joints. The proposed wire-driven manipulator can work under high radiation, carry a large payload, and be easily isolated from water. The design of the manipulator is described in detail, and a closed-form solution to the inverse kinematics is derived analytically. A real-world version of the proposed wire-driven manipulator has been successfully manufactured and tested on an experimental setup. The experimental results demonstrate the feasibility of the proposed tendon-driven serial manipulator, showing that the RMS error between the desired and actual values of each joint is less than 2.1°. Hence, the proposed wire-driven manipulator can be utilized for underwater applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1771992</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1771992</link>
        <title><![CDATA[A platform for investigating prompt framing as interface parameters in foundation models for robotics]]></title>
        <pubDate>2026-04-22T00:00:00Z</pubDate>
        <category>Brief Research Report</category>
        <author>Anup Tuladhar</author><author>Eli Kinney-Lang</author>
        <description><![CDATA[Foundation models, in particular large language models (LLMs), are increasingly popular for describing goals in robotic control, decision making, and execution. Recently, hybrid paradigms that leverage the strengths of reinforcement learning (RL) agents in tandem with LLMs for robotic control have been demonstrated. The interface between the RL agents and the language model, however, offers a unique opportunity to explore how prompt framing may affect such hybrid systems. This work presents a controlled experimental platform to measure and better understand how manipulation of the interface between RL agents and an LLM impacts the behaviour of a hybrid advisor-arbiter architecture. We compared three agents under matched evaluation protocols and initializations in a simulated navigation environment: (i) RL-only tabular Q-learning; (ii) LLM-only (stateless) action selection; and (iii) a hybrid LLM + RL agent. Under a constrained interaction budget (10 episodes per world), the hybrid LLM + RL agent achieves higher mean success and higher mean cumulative reward than both RL-only and LLM-only baselines. Advisor-channel ablations (random recommendations and null recommendations) reduce performance, indicating that structured advice contributes beyond adding extra text. We further demonstrate prompt framing as a controlled factor by evaluating navigation-role personas, narrative personas, and relational variants of a caregiver prompt under matched conditions, yielding heterogeneous effects across framings. The contribution of this work is to provide a structured testbed and evaluation approach for investigating the impact of prompt framing on multi-step decision making and control tasks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1793138</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1793138</link>
        <title><![CDATA[Digital transformation in restaurants: key aspects of service robot deployment from project initiation to evaluation]]></title>
        <pubdate>2026-04-22T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Anniken Susanne T. Karlsen</author><author>Bjørn Andersen</author><author>Solvår Elverum Heirsaunet</author><author>Elin Indergård</author><author>Kristina Nevstad</author><author>Wenche Aarseth</author>
        <description><![CDATA[This study examines the deployment of service robots designed to support waitstaff in food delivery within Norwegian restaurants, which are national pioneers in adopting this technology. It investigates key aspects of service robot deployment, from project initiation through evaluation, and the approach used to ensure that robot functionality aligns with restaurant workflows and spatial configurations. Using an exploratory–explanatory case study design, the research draws on 22 interviews with 34 participants, complemented by observational fieldwork to strengthen contextual understanding. The findings offer an integrated view of the digital transformation process, identifying important considerations across all project phases. One example is the importance of incorporating robot service needs into early facility planning. By addressing potential obstacles such as stairs and doorsteps during front-end design, restaurants can avoid costly redesigns and ensure optimal robot performance. By examining real-world deployment, the study offers practical insights for project managers, hospitality leaders, and others preparing to integrate service robots into restaurant operations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1778864</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1778864</link>
        <title><![CDATA[Evaluation of material effects on three-dimensional cultured skeletal muscle cells for biohybrid robots]]></title>
        <pubdate>2026-04-21T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Hirono Ohashi</author><author>Shunsuke Shigaki</author><author>Seita Fujii</author><author>Masahiro Shimizu</author><author>Koh Hosoda</author>
        <description><![CDATA[Robots are traditionally confined to controlled environments such as factories, where human interactions are limited. However, the demand for robots that are capable of collaborating with humans is increasing. To achieve symbiosis, integrating the physical flexibility and environmental adaptability of living organisms into robotic systems is crucial. An example of such a robot is a biohybrid robot driven by three-dimensional (3D) cultured skeletal muscle cells. These muscle cells, which are composed of myoblasts and an extracellular matrix (ECM), contract and generate force in response to external stimuli. The standardization of such 3D-cultured skeletal muscle cells is essential for practical applications. However, their complete standardization has not yet been achieved. The contractile force of 3D-cultured skeletal muscle cells produced via 3D printing is still insufficient for practical applications as actuators in biohybrid robots. In a previous study, we developed a simple fabrication method for 3D-cultured skeletal muscle cells. This bio-cultured artificial muscle (BiCAM) method allows the shape and cell alignment of tissues to be controlled. Differences in the composition of the ECM have been suggested to affect the contractile force of 3D skeletal muscle tissues; however, their impact on the response characteristics remains poorly understood. In this study, we investigated how the ECM composition influences the contractile force of 3D skeletal muscle cells in biohybrid robots as a step toward their eventual standardization. Compared with tissues cultured under MF conditions, in which electrically induced contraction was previously confirmed, tissues cultured under CM conditions exhibited an approximately two-fold greater contractile force at voltage amplitudes of 10 and 30 V. Furthermore, the fabrication success rate was 100% under CM conditions but only 62.5–70% under other ECM conditions. In contrast, although CM tissues generated larger forces, tissues cultured under MgF and CMg conditions exhibited higher-frequency responses. These findings demonstrate that the BiCAM is a viable actuator and offers new possibilities for the design of biohybrid robots.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frobt.2026.1766383</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frobt.2026.1766383</link>
        <title><![CDATA[Multi-party open-ended conversation with a social robot]]></title>
        <pubdate>2026-04-15T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Giulio Antonio Abbo</author><author>Maria Jose Pinto-Bernal</author><author>Martijn Catrycke</author><author>Tony Belpaeme</author>
        <description><![CDATA[Multi-party open-ended conversation remains a major challenge in human–robot interaction, particularly when robots must recognise speakers, allocate turns, and respond coherently under overlapping or rapidly shifting dialogue. This paper presents a multi-party conversational system that combines multimodal perception (voice direction of arrival, speaker diarisation, face recognition) with a large language model for response generation. Implemented on the Furhat robot, the system was evaluated with 30 participants across two scenarios: (i) parallel, separate conversations and (ii) shared group discussion. Results show that the system maintains coherent and engaging conversations, achieving high addressee accuracy in parallel settings (92.6%) and strong face recognition reliability (80–94%). Participants reported clear social presence and positive engagement, although technical barriers such as audio-based speaker recognition errors and response latency affected the fluidity of group interactions. The results highlight both the promise and limitations of LLM-based multi-party interaction and outline concrete directions for improving multimodal cue integration and responsiveness in future social robots.]]></description>
      </item>
      </channel>
    </rss>