Enhancing robotic dexterity through advanced learning and multimodal perception

About this Research Topic

Submission deadlines

  • Manuscript Submission Deadline: 17 March 2026

This Research Topic is currently accepting articles.

Background

Robotic manipulation is a dynamic field at the forefront of automation and artificial intelligence. Despite recent advances, challenges persist in enabling robots to generalize across tasks, adapt to new environments, and collaborate effectively with humans. A notable limitation is robots' restricted ability to perform contact-rich, fine-grained, and high-precision manipulation, which is crucial for real-world applications. Recent studies highlight the need for learning-based approaches, multimodal perception, and hierarchical planning strategies to overcome these barriers and improve dexterity and autonomy. Progress in the field has also seen the rise of foundation models and vision-language-action models that integrate perception and action, enabling more nuanced manipulation tasks.

This Research Topic aims to deepen understanding and foster innovation in robotic dexterity. The primary goal is to explore the integration of reinforcement learning, intelligent perception, and pre-trained large-scale models to enhance robots' adaptability, task comprehension, and precision. Key objectives include advancing algorithms, hardware solutions, and applications that push the current limits of dexterity. We encourage engagement among roboticists, AI researchers, and automation specialists, working towards dexterous robots that can reliably perform complex tasks, alone or in collaboration with humans, across diverse fields such as manufacturing, healthcare, and service robotics.

To gather further insights into advanced learning approaches for robotic dexterity, we welcome articles addressing themes including, but not limited to, the following:

  • Foundation models and vision-language-action frameworks for enhancing robotic perception and decision-making;
  • Innovative reinforcement learning and imitation learning algorithms for mastering complex, high-dimensional manipulation tasks;
  • Multimodal sensory integration, encompassing visual, tactile, and force feedback, for robust and adaptive interaction;
  • Task and motion planning strategies aimed at achieving precise and reliable dexterous manipulation;
  • Adaptive and compliant control techniques, such as variable impedance control, for safe and flexible object handling;
  • Human-robot collaboration and shared autonomy, enabling intuitive and efficient teaming between robots and human partners;
  • Real-world deployment potential across industrial automation, healthcare, and assistive robotics domains.

We invite original research, review articles, and application-driven studies from academia, industry, and research labs, covering both theoretical advancements and practical applications in AI-driven dexterous robotics.

Article types and fees

This Research Topic accepts the following article types, unless otherwise specified in the Research Topic description:

  • Brief Research Report
  • Data Report
  • Editorial
  • FAIR² Data
  • FAIR² DATA Direct Submission
  • General Commentary
  • Hypothesis and Theory
  • Methods
  • Mini Review

Articles that are accepted for publication by our external editors following rigorous peer review incur a publishing fee charged to authors, institutions, or funders.

Keywords: dexterous robotic manipulation; robot learning; mechanics and control; foundation models for robotics; vision-language-action

Important note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


Manuscripts can be submitted to this Research Topic via the main journal or any other participating journal.
