About this Research Topic
A unique virtual roundtable was held on 5 November 2021 to accompany this Research Topic, critically examining the themes set out below while drawing on the considerable expertise of the Topic Editors and participants.
If you missed it, please visit this link to watch the YouTube recording: https://fro.ntiers.in/NGIYT
The Research Topic is led by the following Topic Editors, who discussed it at the roundtable:
Karl Friston (Professor of Neuroscience, University College London): Free Energy Principle (FEP) and Bayesian Brain models, Predictive Coding and Deep Learning ANNs, Novelty exploration and FEP.
Georg Northoff (Professor of Neuroscience and Mental Health, University of Ottawa Institute of Mental Health Research): Self-referential brain processes, Multi-modal integration of self and other, ‘Nothing for Free’ in General AI.
Tony Prescott (Professor of Cognitive Robotics, University of Sheffield): Embodied Cognition, Sense of Self for intelligence and robot design, Bio-inspired Robotics, AI, agency and ethical issues.
Emily Cross (Professor of Social Robotics, University of Glasgow and Macquarie University): Social robots, Neuroscience of motor-physical social interaction and coordination, Theory of mind and human-robot interaction.
Sheri Markose* (Professor of Economics, University of Essex): Complex Adaptive Systems, Evolvability and viral software-based transposons in genomic intelligence, Offline ‘Self-Ref’/‘Self-Rep’ mirror systems for self-other nexus, Strategic novelty and Gödel Incompleteness.
*Sheri Markose (Associate Editor) moderated the Roundtable.
--
The remarkable success of Statistical AI (SAI) with Deep Learning and Artificial Neural Networks (ANNs) defines the current AI scene. Yet the very conditions that bring about the success of narrow AI militate against general intelligence. These include the non-trivial problem of extrapolating from given input data; curated environments that abstract away from ‘in the wild’ circumstances, with adversarial software agents that can hack and fake; and narrow objectives and rewards that can lead to brittle outcomes and ‘bad robot’ problems. As is increasingly understood, adaptability that goes beyond the standard optimization model with prespecified choice sets requires complex external control over ANNs and their plasticity, which limits their autonomy and capacity for novelty production. Above all, models of optimization in SAI manifest a near absence of principles of decentralized control, such as those of distributed ledger technologies, which can enhance the robustness, trustworthiness and security of black-box AI outputs in ensembles of AI agents.
It is a long-held view that AGI aims to emulate the human brain, which stands as the apogee and prototype of general intelligence. The offline embodied sentient self is known to be the basis of an empathic Theory of Mind and of higher-order human cognition, with an open-ended search for adaptive homeostasis to preserve somatic identity. This search runs into orders of magnitude of 10¹⁵ – 10³⁰, exceeding the germline genome size many times over. Such self-referential information processing, however, comes at the price of autoimmune disease and neuropsychiatric illness relating to the self-other nexus. Hence, our premise is that ‘nothing is for free' in general intelligence.
Molecular and evolutionary biology in the post-Barbara McClintock era has identified the role of viral-based transposable elements (TEs), which cut-and-paste and copy-and-paste, in genomic evolvability (TEs constitute some 45% of the human genome) and in real-time somatic neural plasticity. However, TEs have to be kept in check for their potential malign activity. This neurobiology of evolvability, like the regulatory principles behind the selection of the status quo versus adaptations that result in extended phenotypes, often manipulative of others, is yet to be fully understood.
Just as the success of SAI need not be marred by the lack of evidence on whether the brain does backpropagation, AI that recognizes affective states in humans need not ‘feel’ the same. The big push in AGI has been in neuro-robotics, which necessarily involves multi-modal embodied integration of the sensorimotor pathways of the body schema, often mimicking principles from the grid-like mappings of the hippocampal-entorhinal system for spatial-temporal memory and navigation. Brain-AI interfaces can now harness commands from the brain to operate devices outside it, as in neural prosthetics. Thus, the computational neuroscience behind brain scanning and mind-reading has an exciting future. But with that potential, ethical problems will emanate from involuntary mind control and bio-digital malware.
This Research Topic aims to gather a series of original research articles, mini-reviews, reviews and novel perspectives covering, but not limited to, the following aspects of AGI:
• Challenges for Artificial General Intelligence
• Novelty exploration and Free Energy Principle
• Embodied Multi-modal Integration of Self-Other-Environment in Robots
• Self-reference, Self-Representation and 3-D Self-Assembly of digitized bio-materials
• Moravec's paradox
• Evolution without Objectives and Open-ended Novelty Search
• Deepfakes and Generative Adversarial Networks (GANs)
• Role of adversarial bio-software like transposons for evolvable intelligence
• Neuro-memetic AI and Robots
• Social Robots
• Theory of mind and human-robot interaction
• Bad Robot problem
• Distributed Ledger Technologies for genomic regulatory networks and AI systems
• Brain-AI-Machine Interface: Neural Prosthetics and Neural Hacking
• Ethical and regulatory issues from neural AI and Robots
Keywords: Artificial General Intelligence, Self-referential Embodied Social Cognition, Multi-modal sensorimotor integration, Moravec’s Paradox, Neurorobotics, Rewards and Autonomy
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.