
Editorial

Front. Robot. AI, 23 July 2025

Sec. Computational Intelligence in Robotics

Volume 12 - 2025 | https://doi.org/10.3389/frobt.2025.1662674

This article is part of the Research Topic: Merging Symbolic and Data-Driven AI for Robot Autonomy.

Editorial: Merging symbolic and data-driven AI for robot autonomy

  • 1Department of Computer Science, University of Verona, Verona, Italy
  • 2School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
  • 3Department of Mathematics and Computer Science (DeMaCS), University of Calabria, Arcavacata di Rende, Italy
  • 4National Centre of Scientific Research “Demokritos”, Agia Paraskevi, Greece

Robots are increasingly being deployed to assist humans in many applications such as medicine, navigation, and industrial automation. To truly collaborate with humans in complex environments, robots require advanced cognitive capabilities, including the ability to reason with domain-specific commonsense knowledge and the noisy observations obtained in the presence of partial observability and non-deterministic action outcomes. Research in Artificial Intelligence (AI) has resulted in sophisticated symbolic formalisms based on logics to represent commonsense domain knowledge, as well as probabilistic and data-driven frameworks that quantitatively represent uncertainty in the decisions made by robots.

By themselves, symbolic or stochastic AI methods have limitations when applied to robots in complex scenarios. Symbolic AI methods reason with relational descriptions of the attributes of the domain and the robot to guide the robot’s behavior. However, they tend to require extensive prior knowledge about the domain and the robot, and they make it computationally expensive to operate at the level of granularity required for precise interaction with the physical world, or to reason about uncertainty quantitatively. Probabilistic and data-driven AI methods, on the other hand, elegantly represent uncertainty quantitatively, and provide mechanisms for reasoning and acting at the level of granularity required for interaction with the physical world. These methods, however, offer limited expressiveness for complex cognitive concepts, and it is not always meaningful to reason about uncertainty quantitatively. With the increasing use of AI and robots in different applications, there has been renewed interest in hybrid and neurosymbolic AI frameworks that combine symbolic and data-driven methods. The 10 contributions in this Research Topic highlight the promise and potential of such frameworks in the context of robotics.

Describing a vision for the future, Spasokukotskiy states that next-generation AI systems should be endowed not only with autonomy but also with “morality” that secures alignment in large systems, i.e., they should operate safely within the values of human society. Instead of being in full control of AI, humans would then cooperate and communicate with intelligent systems. Extending this idea, Pal explains the relevance of transparency, explainability, learning from a few examples, and the trustworthiness of an AI system, exploring how insights into human reasoning can be a crucial ingredient for achieving reliable operation with embodied AI systems. In addition, Toberg et al. provide a systematic review of robot systems that represent, reason with, learn, and/or use commonsense knowledge in a wide range of application domains. Symbolic AI methods can play a crucial role in the design of such AI/robotics systems, providing the expressivity for elegantly representing human-level concepts and effectively modeling logical reasoning capabilities. These methods can also support more efficient and transparent learning, and the use of human guidance to generate symbolic abstractions. Das et al. describe a framework that extends an inductive logic program learner to demonstrate this capability on multiple benchmark domains, one of which focuses on planning the assembly of mechanical structures, a core task in industrial automation.

In addition to reasoning with prior knowledge that includes cognitive theories, robots that interact with the physical world process a large amount of continuous multi-modal inputs from different information sources, including humans and other agents. In this context, data-driven AI methods, particularly recent advancements in deep learning, have exhibited groundbreaking performance and established themselves as the state of the art for problems in computer vision, natural language processing, and complex decision-making. For example, Mitrokhin et al. describe a hybrid framework for image-based context awareness, training a hash neural network on images to show that hyperdimensional vectors can be constructed such that vector-symbolic inference arises naturally out of their output. This enhances the robustness and explainability of the classification process, achieving state-of-the-art accuracy on real-world image datasets such as the popular CIFAR-10.
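The vector-symbolic idea underlying such hyperdimensional approaches can be illustrated independently of the specific hash-network framework described above. The following is a minimal sketch (not the authors' method): random high-dimensional bipolar vectors are near-orthogonal, so role-filler pairs can be bound by element-wise multiplication, superposed by majority vote, and later queried by unbinding and similarity. All names and the toy scene are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    """Random bipolar hypervector; random pairs are near-orthogonal."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (element-wise multiply) attaches a filler to a role."""
    return a * b

def bundle(*hvs):
    """Bundling (element-wise majority) superposes items into one vector."""
    return np.sign(np.sum(hvs, axis=0))

def sim(a, b):
    """Normalized dot product: noticeably positive for related vectors, ~0 otherwise."""
    return float(a @ b) / D

# Encode a toy scene record: {color: red, shape: cube}
color, shape = random_hv(), random_hv()                  # role vectors
red, cube, ball = random_hv(), random_hv(), random_hv()  # filler vectors
scene = bundle(bind(color, red), bind(shape, cube))

# Unbind the 'color' role (bipolar binding is its own inverse) and
# compare the result against candidate fillers.
query = bind(scene, color)
print(sim(query, red))   # noticeably positive: the color is 'red'
print(sim(query, ball))  # near zero: unrelated filler
```

Because every operation is algebraic, the symbolic query ("what is the color?") is answered by vector arithmetic alone, which is what makes this representation attractive as a bridge between neural feature extractors and symbolic inference.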

Acquiring symbol abstractions from raw continuous inputs, i.e., symbol grounding, and decision-making become particularly challenging with the high-dimensional inputs received by robots. Despite the impressive results achieved by deep neural networks and foundation models, their direct use in robots becomes inefficient, hinders transparency, and provides arbitrary responses in novel situations. Hybrid frameworks can address these limitations by leveraging the complementary strengths of symbolic and data-driven AI systems. For example, the framework of Nevens et al. uses symbolic AI to enable an agent to construct a conceptual system in which meaningful concepts are formed based on human-interpretable feature channels. They use a dataset of images for manipulating blocks to illustrate how concepts acquired from limited data points can be combined and generalized to unseen instances. Sasaki et al. show that grounding robotic gestures with quantitative meaning calculated from word-distributed representations constructed from a large corpus of text enables robots to display behavior that humans perceive to be natural. Riley et al. describe a framework that supports non-monotonic logical reasoning with abstractions of prior commonsense knowledge and information extracted by deep neural networks from relevant image regions; they show substantial performance improvement compared with the state of the art for visual question answering, and vision-based planning and diagnostics. Furthermore, Grosvenor et al. and Ghiasvand et al. document examples of real-world integration of similar ideas in the context of knowledge-enhanced deep visual tracking of satellites, and a comprehensive architecture for space robotic mission planning and control, respectively.
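The grounding-then-reasoning pattern shared by these frameworks can be sketched in a few lines, under illustrative assumptions (the feature channels, thresholds, and rule below are invented for the example, not taken from any of the cited papers): continuous, human-interpretable perception channels are discretized into symbols, and a symbolic rule then operates on those symbols rather than on raw sensor values.

```python
# Minimal sketch of symbol grounding plus symbolic reasoning.
# Feature names, thresholds, and the grasping rule are illustrative.

def ground(features: dict) -> set:
    """Map interpretable continuous channels to discrete symbols."""
    symbols = set()
    # Hue near either end of the normalized color wheel reads as 'red'.
    if features["hue"] < 0.1 or features["hue"] > 0.9:
        symbols.add("red")
    if features["height_m"] > features["width_m"]:
        symbols.add("tall")
    if features["confidence"] > 0.8:
        symbols.add("detected")
    return symbols

def can_grasp(symbols: set) -> bool:
    """A toy symbolic rule: grasp only confidently detected, tall objects."""
    return {"detected", "tall"} <= symbols

# One simulated observation from a perception pipeline.
obs = {"hue": 0.95, "height_m": 0.3, "width_m": 0.1, "confidence": 0.92}
syms = ground(obs)
print(syms, can_grasp(syms))
```

The appeal of this split is that the rule in `can_grasp` stays human-readable and auditable regardless of how the continuous features were produced, which is precisely the transparency argument made for hybrid frameworks above.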

In summary, the contributions to this topic highlight the importance of merging symbolic and data-driven AI methods in the context of robotics (and AI). These papers demonstrate how such hybrid frameworks enable robots to reason with complex cognitive theories and noisy multimodal sensor observations to achieve reliable, efficient, and transparent scene understanding, planning, diagnostics, and human-robot collaboration in complex simulated and physical domains. The papers also draw attention to the fundamental open problems that need to be addressed to leverage the full potential of robots in practical applications. We hope that these papers will foster further collaboration between the related research communities toward achieving societal benefits.

Author contributions

DM: Writing – original draft. MS: Writing – review and editing. SP: Writing – review and editing. NK: Writing – review and editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Keywords: neurosymbolic AI, probabilistic reasoning, reasoning under uncertainty, hybrid AI, robotics

Citation: Meli D, Sridharan M, Perri S and Katzouris N (2025) Editorial: Merging symbolic and data-driven AI for robot autonomy. Front. Robot. AI 12:1662674. doi: 10.3389/frobt.2025.1662674

Received: 09 July 2025; Accepted: 15 July 2025;
Published: 23 July 2025.

Edited and reviewed by:

Chenguang Yang, University of Liverpool, United Kingdom

Copyright © 2025 Meli, Sridharan, Perri and Katzouris. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Daniele Meli, daniele.meli@univr.it