
Opinion ARTICLE

Front. Robot. AI | doi: 10.3389/frobt.2018.00017

Opinion: Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware

John M. Bishop¹

  • ¹Computing, Goldsmiths, University of London, United Kingdom

Recent articles by Schneider and Turner [6, 7] outline an Artificial Consciousness Test (ACT): a new, purely behavioural procedure to probe subjective experience ('phenomenal consciousness': tickles, pains, visual experiences and so on) in machines; work which has already resulted in a provisional patent application from Princeton University [9]. In light of this, consideration is given here to the claimed sufficiency of ACT to determine the phenomenal status of an Artificial Intelligence (AI) system.
In science and science fiction the hope is periodically reignited that a computer system will one day be conscious in virtue of its execution of an appropriate program; indeed, in 2004 the UK funding body EPSRC awarded a substantial 'Adventure Fund' grant [GR/S47946/01] of around £500,000 to a team of 'Roboteers and Psychologists' at the Universities of Essex and Bristol, with the goal of instantiating 'machine consciousness' in a humanoid-like robot called Cronos. In addition, claims of 'machine consciousness' have long been reported in the literature (e.g. in 2002 Kevin Warwick announced his 'Cybernetic learning robots' to be "as conscious as a slug" [10]).
Other proposals for conscious machines have ranged from the mere 'functional consciousness' of Stan Franklin's 'Intelligent Distribution Agent' [4] to the claim of 'true conscious cognition' of [Pentti] 'Haikonen's Cognitivist Architecture' (HCA): an architecture which seeks to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. Haikonen has asserted that, when implemented with sufficient complexity, HCA will develop consciousness [5].
It is in this febrile atmosphere that Schneider and Turner [7] highlight the importance of a test to ascertain machine consciousness, as: (i) it may be deemed morally improper to oblige such machines to 'serve' humans; (ii) it could raise safety concerns; and (iii) it could impact on the viability of brain-implant technologies. Hence, given the impact of an ACT result that ascribes consciousness to a machine, it is critical that the test is both robust and accurate; in this context Schneider and Turner explicitly clarify (ibid.) that passing ACT "... is sufficient but not necessary evidence for AI consciousness".
Given that one of the most forceful indications that humans experience consciousness is that every adult can readily and quickly grasp concepts based on this quality, Schneider and Turner describe their ACT as follows:
“[T]he ACT would challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness. At the most elementary level we might simply ask the machine if it conceives of itself as anything other than its physical self. At a more advanced level, we might see how it deals with ideas and scenarios such as those mentioned in the previous paragraph. At an advanced level, its ability to reason about and discuss philosophical questions such as ‘the hard problem of consciousness’ would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.”
Turner and Schneider claim (ibid.) that the above procedure is sufficient to establish consciousness in any 'boxed-in' AI system (i.e. any AI not connected to the internet); any AI that passes ACT will be conscious. Furthermore, Schneider clarified (personal communication [PTAI Leeds, 2017]) that the test is robust to repeated probes using the same set of questions.
Thus, if machine M, given question set A, responds with answers A* such that it is deemed to have passed ACT (and consciousness is ascribed to M), then, if posing the identical question set A to a second machine M* generates identical responses A*, M* must also be deemed to have passed ACT.
So construed, an unintended consequence of the above is that any machine M**, programmed explicitly simply to respond to question set A with responses A*, must be deemed to pass ACT; indeed, as the test is explicitly behaviourist in conception, one might further imagine, after Block [2], that were M**'s responses merely generated by a suitably large 'look-up table', M** would still qualify as 'passing' ACT.
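To make the look-up-table objection concrete, consider the following minimal sketch (in Python). The question-and-answer pairs are hypothetical placeholders, not items from Schneider and Turner's actual protocol; the point is only that a machine with no inner life whatever can replay, verbatim, any finite transcript that passed ACT:

```python
# A minimal sketch of a Block-style machine M**: a pure look-up table that
# replays, verbatim, the transcript A* with which machine M passed ACT.
# The question/answer pairs below are hypothetical placeholders, not
# Schneider and Turner's actual test items.

ACT_TRANSCRIPT = {
    "Do you conceive of yourself as anything other than your physical self?":
        "Yes - I think of 'me' as something over and above this hardware.",
    "Could your mind survive the permanent shutdown of your body?":
        "I can imagine surviving it, though I cannot be certain of it.",
    "What do you make of the 'hard problem of consciousness'?":
        "Explaining why information processing is accompanied by experience "
        "seems to resist any purely functional account.",
}

def m_star_star(question: str) -> str:
    """Return the canned answer for a known question; M** has no other
    behaviour, no internal model, and a fortiori no phenomenal states."""
    return ACT_TRANSCRIPT.get(question, "I do not understand the question.")

if __name__ == "__main__":
    # M** reproduces M's passing responses A* exactly, so on a purely
    # behavioural criterion it must also be deemed to pass ACT.
    for question, expected_answer in ACT_TRANSCRIPT.items():
        assert m_star_star(question) == expected_answer
    print("M** reproduced the passing transcript A* verbatim.")
```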
For these reasons, unless we are content to ascribe conscious sensation to a mere look-up table, it is not clear that ACT (or any purely behavioural test) can succeed as a sufficient test to establish phenomenal consciousness in an artificial system; furthermore, it is observed that objections along these lines date back at least to Chomsky's sharp critique [3] of the cognitive vapidity of Skinner's [8] behaviourist approach to language.

Keywords: AI, Machine consciousness, Turing test, ACT, Behaviourism

Received: 28 Nov 2017; Accepted: 05 Feb 2018.

Edited by:

Thomas Nowotny, University of Sussex, United Kingdom

Reviewed by:

Tomer Fekete, Ben-Gurion University of the Negev, Israel  

Copyright: © 2018 Bishop. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Prof. John M. Bishop, PhD, Department of Computing, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom, m.bishop@gold.ac.uk