REVIEW article

Front. Educ., 05 November 2019
Sec. Teacher Education
Volume 4 - 2019 | https://doi.org/10.3389/feduc.2019.00125

Humanoid Robots as Teachers and a Proposed Code of Practice

Douglas P. Newton* Lynn D. Newton
  • School of Education, Durham University, Durham, United Kingdom

This article will distinguish between kinds of robot, point to their burgeoning development and application in the home and workplace, and describe their growing use in the classroom as teachers. It will describe their potential to support, for instance, language development, social and emotional training [e.g., for children with an autistic spectrum disorder (ASD)], and teaching and assessment, and will review researchers', teachers', students', and parents' responses to this use. Some of these responses recognize the potential usefulness of humanoid robots, but also show an awareness that digital “thought” (AI) is not the same as human thought (HI), and show some caution about using robots as teachers. This disparity generates problems and dilemmas. These stem from, for example, a lack of discretion in decision-making, a lack of emotion (other than by simulation), a limited creative ability (in the foreseeable future), the nature of AI/HI relationships, ethical/legal matters, and culturally unsuitable programming. These matters point to the need for forethought about robot roles and for a code of practice for teachers who work with them. Derived from the discussion, such a code is proposed. The introduction of robot teachers will have significant implications for teachers' roles and their professional identity, as human teachers move from being often solitary sources of learning to becoming managers of teaching and learning who need to provide learning opportunities creatively. This change in teacher identity and the teacher's roles is described.

Introduction

Automation, the replacement of people in the workplace by machines, is not new, but digital technology has increased the capabilities of these machines enormously (see e.g., Fletcher and Webb, 2017). For instance, machines in factories manufacture cars tirelessly, precisely, and quickly, and these cars are increasingly able to drive themselves while, at the same time, presenting urgent ethical challenges (Gogoll and Müller, 2017). Artificial intelligence (AI)1 gives these machines their ability to carry out complex tasks with little or no supervision. A robot is one such machine which senses, thinks, and acts; when it does so without external control, it is described as autonomous (Lin et al., 2014). Sometimes the appearance of these machines is governed entirely by function, but when that function is to interact with people, they may be given anthropomorphic form. Humanoid robots are intended to look and behave somewhat like people, and they usually have some means of communicating with them.

Androids are humanoid robots which mimic human form and behavior (Kanda et al., 2009). The pace of development in robotics is rapid, often encouraged by governments for its perceived economic advantages. For instance, the workforce in Japan is declining at a rate which seriously threatens its economy and the expectations of its people, and robots are seen as part of the solution. As well as using them to manufacture goods, the aim is to put them to work as cleaners, sales assistants, museum guides, carers for the young and old, TV programme presenters, and, of particular relevance here, as teachers (Robertson, 2007). This is not a pipe dream of robot engineers; Japan and South Korea, for instance, intend to make significant use of humanoid robots within the next decade, while interest in robots as teachers, as reflected in the number of publications about them, is increasing around the world (Robertson, 2007; Steinert, 2014).

Robots in the classroom can have diverse uses. Some are objects of study for students to practice programming, others are tools which assist a teacher, some can be learning companions, and others might be autonomous teachers which provide some unit of instruction more or less in its entirety (e.g., Major et al., 2012; Mubin et al., 2013). The purpose of this article is to consider what humanoid robots that teach can do for and to learners in the classroom, and, hence, to propose a code of practice for working with robot teachers. Like most innovations, they may have a good side and a bad side, and care is needed to foster the former and counter the latter. The roles of the human teacher change over time with needs, new tools, and teaching aids, but the capabilities and nature of AI promote teaching robots to new levels of relationship with the teacher and the learner. Instead of an unreflective application of such devices, we feel there needs to be forethought about how they could and should be used. We begin by describing what humanoid robots in the classroom can and cannot do (currently), and what people have said about their use. We then raise some issues which have never arisen with other surrogate teachers, and discuss the teachers' roles and identities which would be germane in a world where AI is likely to expand in application and ability.

Humanoid Robots Teaching

Clearly, as objects of study, for learning about robotics, and for practicing programming and control, the presence of a robot can have significant advantages for learning (Keane et al., 2016). It is not robots as objects of study, however, but the uses to which they are put that is the focus of interest here. Engineers have made robots which can move around classrooms, often but not exclusively those of younger children, asking questions, providing information, noting and commenting on answers, and responding to requests. They are able to recognize individual students and maintain a record of those interactions. Frequently working as classroom assistants, they may make useful contributions to learning.

Some robots have been programmed to teach a second language, and they have the capacity to do more than a human teacher is generally willing or able to do (e.g., Meghdari et al., 2013). As well as playing games and engaging students in conversation, they can respond to students' commands in the second language (Toh et al., 2016). There can be more value in this than might at first appear. In student-teacher interaction, the human teacher generally controls the conversation, and the student responds. With a robot, the student can have a more balanced dialogue and be the instigator of actions, as would be the case in everyday conversation. Some students also suffer from more or less crippling performance emotions, like anxiety and embarrassment, which set up an affective barrier to the development of proficiency in speaking a second language (Newton, 2014). Talking with a robot, instead of anxiously interacting with a human teacher or another student, can be less emotive, and so provides a potentially useful bridge to conversational proficiency, less anxiety, and more positive attitudes to learning (Chen and Chang, 2008; Alemi et al., 2014).

In the same way, a robot programmed to be minimally expressive and to interact indirectly can be a learning companion for a child with an autistic spectrum disorder (ASD). Over time, such children's oversensitivity to human interaction may be reduced by slowly adjusting the robot's behavior: its expression and interaction are increased to accustom these children to some human-like behavior and help them develop socially (Robins et al., 2009; Esteban et al., 2017). Many people feel inhibited when they have to work with others, and some are too timid to express themselves openly. Lubart (2017) has demonstrated that avatars, digital substitutes for their users, can enable anonymous participants to take risks with their thinking, generate ideas, and solve problems; interacting through an avatar significantly reduces diffidence. Clearly, where direct human interaction presents problems, or where teachers feel a role threatens their authority or dignity, robots can be useful (Mubin et al., 2013).

A different way of using a robot in the classroom is to have it take the role of student, the student's role being to teach the robot. In Japan, Tanaka and Matsuzoe (2012) found this “learning by teaching” approach to have potential when they tested it on young children learning English. Of course, the novelty of learning with something new can be engaging, although this is likely to be temporary unless what is learned is, itself, engaging (e.g., Hung et al., 2012). In the context of health education in the Netherlands, children (8–12 years old) have been successfully taught about chronic conditions, like Type 1 Diabetes, using a “personal” robot (one which “develops a user model and adapts the child-robot interaction accordingly”) through games and quizzes engaged in by robot and child (Henkemans et al., 2013, p. 175).
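To picture what such a “personal” robot's user model might involve, the sketch below shows, in Python, one minimal way recorded answers could adapt the difficulty of quiz questions to an individual child. It is an illustration under our own assumptions (the class, thresholds, and five-answer window are ours), not the system described by Henkemans et al. (2013).

```python
# A minimal, illustrative user model of the kind a "personal" robot might
# keep for each child. All names and thresholds are assumptions for
# illustration; this is not the system of Henkemans et al. (2013).

from dataclasses import dataclass, field

@dataclass
class UserModel:
    name: str
    history: list = field(default_factory=list)  # 1 = correct, 0 = incorrect
    difficulty: int = 1                          # quiz level, 1 (easy) to 5 (hard)

    def record(self, correct: bool) -> None:
        """Log an answer and adapt difficulty to the last five responses."""
        self.history.append(1 if correct else 0)
        recent = self.history[-5:]
        success_rate = sum(recent) / len(recent)
        if success_rate > 0.8 and self.difficulty < 5:
            self.difficulty += 1   # coping well: stretch the child a little
        elif success_rate < 0.4 and self.difficulty > 1:
            self.difficulty -= 1   # struggling: ease off

    def next_prompt(self) -> str:
        return f"Ask {self.name} a level-{self.difficulty} quiz question."

model = UserModel("Sam")
for answer in [True, True, True, False, True]:
    model.record(answer)
print(model.next_prompt())
```

Even so simple a loop raises the privacy questions discussed later: the model accumulates a per-child record that must be stored, secured, and eventually destroyed.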

In the same way, there is evidence that robots can support language development and writing skills, teach sign language, enhance reasoning and some kinds of problem solving, support self-regulated learning (SRL) and foster SRL skills using prompts, and help with small group work by answering questions while freeing the teacher to give more time to other groups and to individuals (Pandey and Gelin, 2017). Of course, some learning and motivational effects may be due to the current novelty of the robot in the classroom, and it is not entirely certain whether, with familiarity, such benefits will persist (Baxter et al., 2015). There are indications that they can decline over time, and that, in some cases (as in vocabulary development), similar learning may be achieved using other devices, like tablet computers (Vogt et al., 2019). Even the social behavior of some classroom robots may, at times, be a distraction which reduces learning (van den Berghe, 2019). Nevertheless, robot teachers have at least some potential to teach successfully.

Attendant Concerns

Putting aside some current technological limitations in artificial intelligence's ability to recognize speech and to answer follow-up questions, limitations which are likely to lessen in the future (Crompton et al., 2018), the problem is that artificial intelligence is not the same as human intelligence; in other words, robots do not think like people. They may do so in ways which achieve the same ends, but this difference creates what Serholt et al. (2017) call “ethical tensions.”

The first is a matter of privacy. The robot may assess a student's responses, provide feedback, and maintain records, all potentially useful for a teacher. It can also use this information to build and store personal profiles which shape its future interactions. This might make its teaching more effective, but when the data is stored without the consent of the student, often a minor, it may breach data protection laws, and it carries the risk of being accessed and misused. In such an event, who is responsible for the breach (or, more to the point, for preventing it)? More broadly, who is to blame for any detrimental consequences of a robot's actions (Lin et al., 2014)? A UK government report was of the view that: “It is possible to foresee a scenario where AI systems may malfunction, underperform or otherwise make erroneous decisions which cause harm. In particular, this might happen when an algorithm learns and evolves of its own accord” (SCAI, 2018, p. 135). If the robot will not have to defend its actions, will the teacher, the school, or the manufacturer be legally liable (Asaro, 2007)? Again as a matter of privacy, students can be monitored continuously by a robot, a process which has been used to erode prisoners' resistance and has come to be known as psychological imprisonment (Serholt et al., 2017)2, yet it is seen as an added benefit by Johal et al. (2018).

A second concern is about the norms and values which shape a robot's programming, giving it social and cultural biases. Programmers are immersed in particular cultures, and, unconsciously or otherwise, their cultural norms and values are likely to be reflected in what they have their robot do (Robertson, 2007; Nørskov and Yamazaki, 2018). For instance, in China it is acceptable for a child to hug a teacher as a mark of gratitude, but this would be frowned on where such physical contact is proscribed (Kanda et al., 2004; Mavridis et al., 2012). Programming might offer a choice of cultures (which, nevertheless, some might see as “ideal” behaviors, or impositions of others' values; Sloman, 2006), but this would be much more difficult when some fundamental ideology shapes the very nature of how the robot teaches.

Another concern is that students may spend a lot of time interacting with robot teachers. Children in particular learn much by imitation (Bandura, 1962; as do some robots, e.g., Schaal, 1999), but if they do not have adult humans to imitate, will their interpersonal behavior be adversely affected? For instance, in Australia, it was found that “the children mimicked the robot's interaction styles, suggesting social modeling was occurring” (Broadbent, 2017, p. 633). This was also noted in the diabetes teaching study mentioned above, where the children encouraged and complimented the robot, mimicked how it spoke, and used its vocabulary. Such children can believe the robot has mental states and feelings, offer it comfort, and tell it secrets (Kahn et al., 2012). This generates unease about the development of an ability to relate to one another with empathy, sympathy, consideration, discretion, tolerance, and some understanding of the human condition.

Of course, children often play at being something else, so perhaps the robot could teach them how to interact with people. Social robots are designed to be “user friendly” in that they are intended to interact with people, identify emotions, simulate emotions, and provide appropriate although unfelt (some would say deceitful) responses (c.f. chatbots). These robots are given appearances that people see as friendly, and human-robot emotional attachments can form, albeit one-way (Beran and Ramirez-Serrano, 2011; Toh et al., 2016). A concern is that some children will begin to adapt to, and prefer, the more amenable and predictable human-robot relationship, which lacks the “reciprocacy of human-human relationships” (Serholt et al., 2017, p. 616). Enfants sauvages, children who grow up outside human contact, are known to be deficient in social abilities, and remediation can be difficult, at best (Classen, 1990). We do not suggest that this extreme will be the outcome of being taught by current classroom robots. Indeed, it may be possible to use robots to support social and emotional skills. For example, Leite et al. (2015), in the USA, used small robots to act out a story with moral lessons for young children (6–8 years old), and Wolfe et al. (2018) shared the teaching of emotions with a social robot. Serholt and Barendregt (2016) point out that while children do engage socially with robots, interaction may reduce over time, and the child-robot relationship may not be the same as a child-adult or child-child relationship. Very young children, however, have been found to treat a humanoid robot more as a peer than a toy, although the nature of the relationship is likely to change as the child develops (Tanaka et al., 2007).

Nevertheless, reduced interaction with people could begin to degrade human-human behavior, simply because there is less opportunity to learn its complexity and subtleties, and how to respond when human-human interaction fails. The risk would be greater if education were automated, particularly for the young, but schooling is unlikely to be entirely automated in the near future, given AI's current limitations. Even so, the boundaries of automated teaching will be explored as new applications are envisaged, as when telepresence robots are used to teach children unable to attend school due to illness or remoteness (e.g., Newhart et al., 2016).

Yet another concern stems from the kinds of thinking and habits of mind that robots may promote. Bakshi et al. (2015) have described how the digital revolution will affect the workplace. They see AI as doing those tasks which can be rendered into routines, putting some 47% of jobs in the USA and 35% of jobs in the UK at risk. People will be left largely with work that is currently beyond the capability of AI. Bakshi et al. see this as centered on creative activity and problem solving [which can, of course, involve AI (e.g., Savransky, 2000; Rea, 2001)]. Education needs to respond to this future by preparing its students for it (e.g., DCCE, 2017-19). Can a robot usefully support the range of purposeful thought expected of students in the classroom? As far as memorization and recall of facts, figures, and procedures are concerned, it seems likely. Robots are increasingly able to ask questions, recognize correct answers, exercise students' recall, give immediate feedback, and record students' progress in this kind of purposeful thought. A robot may also expound a topic, present information, direct attention to what matters, and then check for understanding with tasks requiring specific predictions and applications. But how well a contemporary robot can cope with responses that cannot be pre-determined, as in creative thinking, whose products are potentially infinite, is unclear. Similarly, it is questionable whether the robot could adequately assess thinking which involves personal values, beliefs, and goals, as in decision-making (Newton, 2016). The danger is that what robots can do becomes all that is done, and that this is seen as a complete education. It could be worse still if students learned to leave their thinking entirely to AI.

Finally, there is the matter of how the robot teaches. Teaching has been called “emotional labor.” This is not simulated emotion, but emotion felt and acted on. It is what makes teachers devoted to their subject and to teaching, and leads them to teach with passion, not false passion (Newton, 2016). Can a robot, without deceit, feel or even communicate that devotion? Can it, with honesty and belief, bring students to love learning or a subject and give their lives to it? And will it be remembered in years to come as the teacher who made a fundamental difference (Howard, 1998)?

Human Reflections

What do people think about robots as teachers? Care is needed here, as experience of classroom robots is, as yet, relatively limited, and non-existent in some parts of the world. Some studies have collected views from those without direct experience, and any expressed willingness to interact with robotic teachers may be due to their novelty and may decline with familiarity (e.g., Robins et al., 2009; Broadbent, 2017). At the same time, what classroom robots do also varies. For instance, Fridin and Belokopytov's socially assistive robot in Israel could play educational games with pre-school and elementary school students, and teachers were favorable toward using it (although, in this case, they were probably pre-disposed by prior interest; Fridin and Belokopytov, 2014). Cultures and educational systems also vary around the world, so views in one context may not generalize entirely (or at all) to another. With that in mind, we begin with some views of the general public.

Two pan-European surveys found the general public to be broadly positive about robot applications in general, but with some variation, largely from northern to southern countries, the former tending to be more favorable than the latter. While four out of ten people were comfortable with using robots in education, more than three out of ten had reservations, and few saw it as a priority. Younger people, men, and those with more years in full-time education tended to be more favorable (TNS Opinion and Social, 2012, 2015). A thought-provoking study by Mavridis et al. (2012) in the Middle East usefully addresses the gap between Western and Far Eastern studies. Interaction with their android robot (Ibn Sina, simulating the Islamic philosopher of that name who lived a thousand years ago) provided the opportunity to collect responses from conference delegates from around the world. As far as children's education is concerned, this opens a window onto what parents' responses might be. Those from South-East Asia were more positive about the prospect of robots teaching their children than those from other parts of the world; the Far East is where much of the research on such applications takes place. Nevertheless, there can still be some hesitation about using robots as teachers (Lee et al., 2008). Those from Europe and the USA recognized that children might like robot teachers, but had reservations about their use. This indicates the current climate surrounding the potential adoption of robot teachers.

Conde et al. (2016) report an exploratory study in Spain of a robot, Baxter, teaching students from roughly kindergarten to 18 years of age. Baxter could be described as being toward the less humanoid end of the spectrum in appearance. Most of the students said that they felt comfortable interacting with robots, and the younger ones in particular thought they could be friends with them. Direct experience with the robot did tend to lessen concerns about interacting with it, and, as might be expected, the younger children tended to be less critical. Broadbent et al. (2018, p. 295) suggest that “children accept robots easily because they have a natural ability to empathize with objects and interactive devices,” as is the case with some of their toys. Their study, in New Zealand, covered a similar age range to that of Conde et al., but their robots were designed to be more toy-like in appearance (one, Paro, was like a young seal). Again, most children were comfortable with the robots and talked with them; the highest level of engagement, however, was by the primary/intermediate school children. It has to be remembered, however, that these robots were intended largely for use as companions in isolated rural schools with small student numbers, with some application in, for example, practicing mathematics or a language. When robots are used mainly as teaching assistants, students say they are willing to talk to them and be their friends, although few want robots to grade their work, monitor their behavior, or replace human teachers (Fridin and Belokopytov, 2014). Earlier studies in Japan by Kanda et al. (2004, 2009) and in South Korea (Shin and Kim, 2007) also found that children established what might be called friendly relationships with a robot, even when it was not particularly humanoid in appearance.

Serholt et al. (2017) held focus groups of practicing and pre-service teachers studying for a Master's degree in Education in Sweden, Portugal, and the UK. The groups were generally positive about digital technology, although none had direct experience of robot teachers. Regarding matters of privacy, the teachers pointed out that data about students were already stored electronically, but there was some concern about the nature of what would be stored, the risk of unauthorized use or use by state agencies, surveillance, and the lack of control over that data by the students or their parents. Nevertheless, there was some feeling that this should be set against a background of society's declining concern for privacy in the digital age. It was felt that robots could be useful in routine teaching (referred to as “training”), but that they lacked the perception and insight needed to help students overcome their difficulties, and should not take on duties affecting students that teachers are uniquely able to fulfill [like grading students' work; noted in an earlier study by Serholt et al. (2014)]. Their concern was that this would not be recognized, and autonomous robots would eventually replace teachers. As a consequence, there would be a dehumanization of children, and they would become over-reliant on robots for their thinking. Some also felt that working with robot assistants would make teachers passive and over-reliant on AI for what happened in the classroom. For these teachers, an acceptable role for the classroom robot was a controlled, instrumental one. Kennedy et al. (2016) similarly found what they call “cautious but potentially accepting” attitudes amongst some primary teachers in the UK, with an additional concern that some children might be isolated by interacting mainly with the classroom robot. They point out that teachers' beliefs and attitudes are important as they at least partly determine if and how technology is used. At the same time, we must bear in mind children's increasing exposure to digital devices in their homes.

Given these views, the claim by Johal et al. (2018) that there is resistance to the acceptance of robots seems a little overstated. On balance, these studies indicate that, at this stage, there is a cautious interest in their use as teaching assistants. There could, however, be another side to this in the future: there is also the robot's point of view. Baxter et al. (2015) noticed that teachers seem inclined to treat robots as having particular roles, like that of “informant.” Steinert (2014) has grouped robots in general into those which are obedient instruments and those which autonomously take decisions and act on them. The first cannot be held responsible for their actions, while the second may, in due course, become so. If they do, how will being an informer affect their role? Steinert adds that humans may behave toward robots in various ways, and mentions that children tend to treat robots as they do animals. Children have also been observed to abuse robots when adults are absent (see, e.g., Broadbent, 2017). Few would doubt that human teachers have a right to safety and freedom from bullying in the workplace, but at what point will the robot be allowed such rights? If some future autonomous robot is to be nothing more than a slave, like the slaves of the Ancient World (“robot,” coined by Karel Čapek in the 1920s, refers to the coerced laborer of central Europe's feudal system; Robertson, 2007), this is a question which may need to be answered. It may be a concern for a later generation, but some speculation already touches on it (McCauley, 2007; Gunkel, 2018), and a definition of “cyberlife” has been contemplated (Korzeniewski, 2001).

Education, Robots, and Teachers

Aids to teaching and learning are not, of course, new. Textbooks, for instance, are long-standing surrogate teachers which have found wide application around the world, but no-one worries that children will behave like a book. Humanoid robots, however, are more active, even pro-active. Unlike the passive textbook, they can respond and adapt to each student, tailoring teaching to particular needs. There is clear evidence that they have the potential to support learning, as in teaching children about their medical conditions, developing and rehearsing learning, and testing it. They can also take on teaching roles which human teachers may find time-consuming, uncomfortable, inhibiting, or unfeasible. For example, they can patiently help a student practice a skill or procedure, practice conversation in a foreign language, or act dumb and be “taught” by the student. They can even do what a teacher's very presence would make difficult, as in teaching an ASD student while slowly accustoming that student to social interaction. Belpaeme et al. (2018, p. 7) provide a positive and well-evidenced evaluation of the potential of robots to enhance learning through their physical presence in the classroom. They concluded that, “Robots can free up precious time for human teachers, allowing the teacher to focus on what people still do best: providing comprehensive, empathetic, and rewarding educational experience.” But, beyond a mere division of labor, there is likely to be an increasing potential for a productive collaboration between HI and AI (Ball, 2019).

To set against this are concerns about privacy, malfunction, and perpetual surveillance. Matters of privacy and legal responsibility may be eased through legislation, although probably not eliminated. The EPSRC (2011) principles for the design and manufacture of robots make humans responsible for all that a robot does, but want robots to be designed “as far as practicable to comply with laws, rights, and freedoms, including privacy,” a leeway criticized by Müller (2017), but one reflecting the practical limits of what is currently implementable (McCauley, 2007)3. A fail-safe approach to robot manufacture and some form of override control may minimize the effects of malfunction, and time-out for the robot would give students a break from its all-seeing vigilance.

There may also be concerns about robots which make decisions about what is educationally appropriate for a particular student. AI decisions may not be the same as those of a human teacher, who understands a student's motives, values, and goals, and the emotions which drive the student's behaviors, and so can exercise discretion, or tune a decision to allow for them. There are also concerns about the habits of mind a robot may encourage. Entirely dispassionate thought is not possible for people, and, at times, it may be inappropriate for a fulfilling, rewarding life (Newton, 2018). Robot teaching may result in a neglect of certain kinds of purposeful thinking (although there is a risk of that with a human teacher, too; Newton and Newton, 2000).

There are, further, concerns about effects on human-human interaction and relationships. Children have been observed to mimic the robot and treat it as though it is like themselves; children in their pretend play may give certain toys human attributes and become emotionally attached to them, but they generally know that their toys are not alive. Robots, however, are becoming increasingly anthropomorphic, and it remains an open question how well children will discriminate between robots, animals, and humans in the future. Clearly, robots can shape social behavior, as their effect on ASD students has shown, so this concern seems to have some foundation. In addition, technology can change and even encourage new behaviors, as with the smartphone and immersion in some virtual worlds (e.g., Persky and Blascovich, 2006). Robots may shape the kinds of thinking that are practiced and tested, and even an inclination to think in certain ways. Belpaeme et al. (2018, p. 7) ask how far we want to delegate education to machines, with the risk that “what is technologically possible is prioritized over what is actually needed by the learner.” At times, the notion of technological determinism has been a contentious one (for a discussion, see e.g., Paragas and Lin, 2016; Hauer, 2017); in this context, however, there is a danger that the capabilities of the technology could directly determine what is learned. There is also the matter of equity of access: students in different parts of the world, and even in one region, are likely to vary in the amount and kind of access they have to digital technology, including robot teachers (see, e.g., Dimaggio et al., 2004; Pöntinen et al., 2017), or, perhaps more pertinent here, to human teachers.
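The “override control” and robot “time-out” mentioned above can be pictured quite simply. The sketch below is a minimal illustration under our own assumptions (the Robot class is a stand-in; no real robot API is implied): a wrapper gives the human teacher an emergency stop, and allows monitoring to be suspended so that students get a break from observation.

```python
# A minimal, illustrative override wrapper: the human teacher holds the
# off-switch (cf. footnote 5), and monitoring can be suspended to give
# students a break from the robot's vigilance. The Robot class is a
# placeholder; no real robot API is implied.

class Robot:
    def step(self) -> None:
        print("robot: one teaching action")

    def record_observation(self) -> None:
        print("robot: logging student data")

class SupervisedRobot:
    def __init__(self, robot: Robot) -> None:
        self.robot = robot
        self.stopped = False     # set by the teacher's emergency stop
        self.monitoring = True   # may be suspended for a "time-out"

    def emergency_stop(self) -> None:
        self.stopped = True

    def set_monitoring(self, on: bool) -> None:
        self.monitoring = on

    def step(self) -> None:
        if self.stopped:
            return               # fail-safe: do nothing once stopped
        self.robot.step()
        if self.monitoring:
            self.robot.record_observation()

r = SupervisedRobot(Robot())
r.step()                 # teaches and records
r.set_monitoring(False)  # time-out from surveillance
r.step()                 # teaches without recording
r.emergency_stop()
r.step()                 # no further action
```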

This should be seen against a background of moderately positive attitudes toward robot teaching assistants, although parents and teachers are more cautious than students. This caution does not generally reflect an anti-robot attitude, and may be useful in prompting constructive thought about robot-student interaction. We should expect robots to continue to develop in capability, and adults' concerns might be greater if the robot were the sole teacher. However, as robots become more common in the home, at work, in hospitals, and in the high street, so, too, are they likely to be accepted in the classroom. This, in itself, is not a bad thing. We must prepare children for the world in which they will live, and enable them to develop “digital literacy.” As a UK Select Committee (SCAI, 2018, p. 77) suggests: “All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.” Nevertheless, we should consciously decide when to draw on a classroom robot's potential, rather than drift haphazardly into its use.

If the benefits of AI in the classroom are to be maximized, and its potential harms minimized, teachers' roles will need to change. It has always been easier to give more time to memorization than to understanding and to creative, evaluative, and wise thinking, but the problem today is more one of dispelling the data smog and using information wisely than of committing data to memory (e.g., Newton, 2012). In the future, competence in creative and evaluative thinking is likely to confer greater advantages than it does today. The balance of a teacher's role will need to move much further in this direction (and examinations will need to do the same if they are not to undermine the move). Teachers will need to see their primary goal as developing their students' competence in these kinds of productive thought. This, of course, requires knowledge of strategies and activities which support it, and may call for professional development and training.

These teachers will be supported by robot assistants whose current strengths may not lie in this direction. Teachers will need to learn to use them in ways which support this move in thinking, as when robots help a student acquire pre-requisite skills and understanding, supply students with ready information so they can practice creative and critical thinking, and show unlimited patience as students develop ideas and put them into effect through scaffolding activities. During this, the teacher must watch for any tendency of students to economize mental effort and leave thinking to the robot. We must also take care that students, particularly the younger and more vulnerable, are not adversely affected by digital technology. With such children, the teacher will have to be equally watchful for any tendency to adopt dehumanizing robot behaviors, other than playfully. The development of an understanding and appreciation of human behavior, particularly the emotions and how they shape it, remains a priority. Nevertheless, children may need to learn to interact with robots appropriately, bearing in mind that, in their lifetimes, humanoid robots will become more sophisticated.

Teachers plan what they will teach, but in the future this planning will need to include the robot assistant, be more reflective about what is being learned both formally and informally, and, importantly, about what is being overlooked. This oversight role may be supported by the robot's ability to collect and maintain information about each child, and even to recommend what may be needed next, but the teacher will need to defend that data from illegitimate access. Given the speed at which AI is developing and being realized in robot form, it is surprising how little attention is given to it by those who concern themselves with “the future of education” (e.g., BERA, 2016; DeArmond et al., 2018). The teacher's role is, of course, culture dependent, and what is appropriate in one milieu may not be in another. This means that variations on the theme are to be expected, and some may be radical, but the presence of a robot teacher will bear upon what a teacher does and can do, wherever it is. With this caveat, we venture to offer a code of practice for teaching with robots.

Before enlisting the help of a robot teacher, a school should develop a policy on its use, one reconsidered from time to time, and especially when that robot is supplemented or replaced by a more capable one. This should include a code of practice. Suggested by the above account, and relating to current classroom robots, we offer the following:

A Code of Practice4

1. There should be a collective judgement of the suitability of the assumptions, values and beliefs reflected in the robot's teaching, and also about matters that should be reserved for the human teacher.

2. A human teacher should be responsible for arranging and managing the learning environment, and for the kinds and quality of teaching and learning which takes place.

3. A human teacher should be present when a robot teacher is in use5.

4. Care should be taken to ensure that data collected by the robot or human teacher is secure, and is maintained only for the minimum length of time it is needed, after which it is destroyed6 (a minimal illustrative sketch of such a retention rule follows this list).

5. Decisions taken by a robot about teaching and learning should be monitored and, if judged inappropriate, changed at the teacher's discretion.

6. Younger children should not interact only or predominantly with a robot teacher; an upper limit of time in robot-human interaction should be imposed7.

7. The teacher should ensure that young children see, experience and reflect on human-human interaction in ways which illustrate its nature, and exercise the skills of interpersonal behavior.

8. The teacher should ensure that children interact with robot teachers appropriately.

9. Care should be taken to discourage a habit of shallow thinking arising from robot use, or of leaving thinking and decisions to the robot teacher.

10. Care should be taken to ensure that children exercise a wide range of thought in the classroom, giving due weight to higher levels of purposeful thinking and to thinking dispositions, for which the human teacher should be largely responsible8.
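Point 4 of the Code can be made concrete with a small sketch. This is purely illustrative: the record fields, the 90-day period, and the purge function are our assumptions, and any real retention rule must follow the applicable data-protection law (see footnote 6).

```python
# A minimal, illustrative retention rule for point 4: records collected by
# a robot or teacher are dropped (destroyed) once they exceed a retention
# period. Schema and period are assumptions; local law governs in practice.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

RETENTION = timedelta(days=90)  # assumed policy; set per local law

@dataclass
class StudentRecord:
    student_id: str
    note: str
    created: datetime

def purge_expired(records: List[StudentRecord],
                  now: Optional[datetime] = None) -> List[StudentRecord]:
    """Keep only records still within the retention period."""
    now = now or datetime.utcnow()
    return [r for r in records if now - r.created <= RETENTION]

records = [
    StudentRecord("s01", "practiced level-2 vocabulary", datetime(2019, 1, 10)),
    StudentRecord("s01", "asked for help with fractions", datetime(2019, 9, 1)),
]
records = purge_expired(records, now=datetime(2019, 9, 30))
# Only the September record survives the 90-day rule.
```

A scheduled purge of this kind implements “maintained only for the minimum length of time it is needed”; what counts as “needed,” and who may run the purge, remain decisions for the school's policy.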

Teacher identity, or what it means to be a teacher, is an evolving, complex collection of personal roles, behavioral norms, and social and cultural expectations (Akkerman and Meijer, 2011). The Code of Practice, as a collection of expected roles and norms of behavior, would influence a teacher's professional sub-identity relating to working with robots. Roles centered upon the student-teacher relationship are a main feature of teacher identity (Zembylas, 2003). Of particular relevance are behavioral norms associated with the expression of emotions (van der Want et al., 2018). For instance, in some educational systems a steering, friendly, and understanding teacher is generally seen as more appropriate than an uncertain, reprimanding, dissatisfied one. The Code of Practice recommends that the nature of human-human and robot-human interaction become a conscious concern. At the same time, some human teachers may need to concern themselves less with their students' acquisition of information, routines, and procedures, and more with developing their competence in open-ended kinds of thinking; examinations need to reflect this if they are not to impede such a change. But overarching this is the need for teachers to be managers of teaching and learning, creative providers of learning experience, and imaginative users of available resources (including themselves) to meet the needs of their students in a digital age. Of course, where the emphases lie may vary with student age. Whatever the phase of education, the robot may present dilemmas, but it also has the potential to free teachers to think more about the kinds of learning (formal and informal) taking place, the direction they should take, and the particular needs of individuals. This probably represents the main change in teacher identity that may soon be needed.

Conclusion

GRIN technologies (genomics, robotics, information, and nanotechnologies) are changing the way we learn, play, work, and interact (O'Hara, 2007). Robot teachers offer opportunities but also challenges for teachers, unlike the classroom aids of the past. On the one hand, they make new ways of teaching and learning possible, and their presence helps to prepare children for a world of AI-enabled products with which they will have to interact daily. On the other hand, robots may degrade human interaction, encourage laziness in thinking, and narrow what is exercised to what robots can do. Their novelty is attractive, but there are understandable reservations about their use in the classroom. There is a danger that we will drift into the future without forethought about how to use and not use robot teachers (SCAI, 2018). With robots present, human teachers' roles will need to change in order to maximize the benefits while minimizing the detriments. At times, this should go beyond a simple division of labor between teacher and robot (HI and AI), to include collaborative teaching between HI and AI when, together, they produce a more effective learning experience for the students, and illustrate HI/AI collaboration, modeled by the teacher. Working with highly sophisticated robots is likely to bring about a change in teacher identity, moving it from a largely solitary responsibility for students' learning toward a more or less joint enterprise, but one in which the human teacher has oversight of and manages the teaching. In the foreseeable future, robot teachers are likely to have a significant impact on teaching.

As we move into this new world, human teachers will probably need training to work with robot teachers and in AI/HI collaboration in the classroom, and they and their trainers should reflect on its pros and cons. This should be informed by research. For instance, teachers need to know how children's relationships with robot teachers change over time and with use. Some tools which may lend themselves to such research are emerging (e.g., Spirina et al., 2015), but teachers would also find clues to evolving relationships and the development of children's personal identities useful in the classroom. We often see research about how to support learners' engagement with new technologies. This is not a bad thing as it prepares children for the world in which they will live. But we also need research on when not to use a particular kind of digital technology, and on how to teach children (and adults) to use such technologies with discernment and discretion. The growing capabilities of AI also bring with them matters of ethics that need to be addressed and monitored. There needs to be collaboration between, for instance, robot engineers, programmers, teachers, sociologists, and ethicists to ensure that rights are observed, and cultural and ideological matters considered effectively.

Humanoid robot teachers have the potential to make a useful contribution in the classroom, and they will become more autonomous and more capable over time, but they do not think and feel like people. Those who work with them will need to think in different ways about what they do. But one thing they should bear in mind is the need for children to learn to be human (Macmurray, 2012).

The Code of Practice and the reflection on teachers' roles and identities offered here are intended to support that forethought and preparation.

Author Contributions

DN researched and wrote the first draft of the article. LN invited, collated, and integrated colleagues' and practicing teachers' comments on it, particularly in relation to iterations of the Code of Practice. Both also collaborated to finalize the article.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. ^The definition of AI adopted by the UK's Select Committee on Artificial Intelligence (SCAI, 2018, p. 13) is: “Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.”

2. ^This concern extends to other forms of surveillance technology (see e.g., van den Hoven and Vermaas, 2007; Soroko, 2016; Perry-Hazan and Birnhack, 2018).

3. ^In this context, some refer to Asimov's Laws of Robotics (in brief, directing that robots must serve and never harm humans), but the laws have proved to be too ambiguous to implement in current robots, and if future robots can “understand” them, they would have reached a stage where it would be unethical to apply the laws to them, as the laws would enslave such robots (see, e.g., Clarke, 2011).

4. ^We thank those colleagues and practicing teachers who kindly commented on summaries of the article and drafts of this Code of Practice.

5. ^At some point, this may need modification should autonomous robots become more competent and fail-safe. Here, it is a precautionary suggestion which assumes that the human teacher has the off-switch.

6. ^Laws regarding data management vary from place to place.

7. ^The upper limit may depend on child age and robot function: if used to overcome a disability or disadvantage, for instance, a different limit may be appropriate.

8. ^There may, of course, be ways in which a robot can support a teacher in this work.

References

Akkerman, S. F., and Meijer, P. C. (2011). A dialogical approach to conceptualizing teacher identity. Teach. Teach. Educ. 27, 308–319. doi: 10.1016/j.tate.2010.08.013

Alemi, M., Meghdari, A., and Ghazisaedy, M. (2014). Employing humanoid robots for teaching English language in Iranian junior high-schools. Int. J. Hum. Robot. 11:1450022. doi: 10.1142/S0219843614500224

Asaro, P. M. (2007). “Robots and responsibility from a legal perspective,” in Proceedings of the IEEE. Available online at: http://www.roboethics.org/icra2007/contributions/ASARO%20Legal%20Perspective.pdf (accessed October 17, 2019).

Bakshi, H., Frey, C. B., and Osborne, M. (2015). Creativity vs. Robots. London: Nesta.

Ball, P. (2019). AI beautiful mind. Chemistry World 16, 31.

Bandura, A. (1962). “Social learning through imitation,” in Nebraska Symposium on Motivation, ed M. R. Jones (Oxford: University of Nebraska Press), 211–274.

Baxter, P., Ashurst, E., Kennedy, J., Senft, E., Lemaignan, S., and Belpaeme, T. (2015). “The wider supportive role of social robots in the classroom for teachers,” in 1st Int. Workshop on Educational Robotics at the Int. Conf. Social Robotics (Paris).

Belpaeme, T., Kennedy, J., Ramachandran, A., Scassellati, B., and Tanaka, F. (2018). Social robots for education: a review. Sci. Robot. 3:eaat5954. doi: 10.1126/scirobotics.aat5954

BERA (2016). The Future of Education. London: British Educational Research Association.

Beran, T., and Ramirez-Serrano, A. (2011). “Can children have a relationship with a robot?” in Human-robot Personal Relationships, eds M. Lamers and F. Verbeek (Heidelberg: Springer), 49–56. doi: 10.1007/978-3-642-19385-9_7

Broadbent, E. (2017). Interactions with robots. Annu. Rev. Psychol. 68, 627–652. doi: 10.1146/annurev-psych-010416-043958

Broadbent, E., Feerst, D. A., Lee, S. H., Robinson, H., Albo-Canals, J., Ahn, H. S., et al. (2018). How could companion robots be useful in rural schools? Int. J. Soc. Robot. 10, 295–307. doi: 10.1007/s12369-017-0460-5

Chen, G. D., and Chang, C. W. (2008). “Using humanoid robots as instructional media in elementary language education,” in Second IEEE International Conference on Digital Game and Intelligent Toy Enhanced Learning (IEEE), 201–202. doi: 10.1109/DIGITEL.2008.17

Clarke, R. (2011). “Asimov's laws of robotics,” in Machine Ethics, eds M. Anderson and S. L. Anderson (Cambridge: Cambridge University Press), 254–284.

Classen, C. (1990). La perception sauvage. Etude sur les ordres sensoriels des enfants ‘sauvages’. Les. Cinq. Sens. 14, 47–56. doi: 10.7202/015127ar

Conde, M. Á., Fernández, C., Rodríguez-Lera, F. J., Rodríguez-Sedano, F. J., Matellán, V., and García-Peñalvo, F. J. (2016). “Analysing the attitude of students towards robots when lectured on programming by robotic or human teachers,” in Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality (Salamanca: ACM), 59–65.

Crompton, H., Gregory, K., and Burke, D. (2018). Humanoid robots supporting children's learning in an early childhood setting. Br. J. Educ. Technol. 49, 911–927. doi: 10.1111/bjet.12654

DCCE (2017-19). The Durham Commission on Creativity and Education. Available online at: https://www.durham.ac.uk/creativitycommission (accessed October 17, 2019).

DeArmond, M., Campbell, C., and Hill, P. (2018). The Uncertain Future of Teaching. Seattle, WA: Center on Reinventing Public Education.

Dimaggio, P., Hargittai, E., Celeste, C., and Shafer, S. (2004). “Digital inequality,” in Social Inequality, ed K. M. Neckerman (New York, NY: Russell Sage), 355–400.

EPSRC (2011). Principles of Robotics. Available online at: https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesof_robotics/ (accessed October 17, 2019).

Esteban, P. G., Baxter, P., Belpaeme, T., Billing, E., Cai, H., Cao, H.-L., et al. (2017). How to build a supervised autonomous system for robot-enhanced therapy for children with autism. Paladyn J. Behav. Robot. 8, 18–38. doi: 10.1515/pjbr-2017-0002

Fletcher, S. R., and Webb, P. (2017). “Industrial robot ethics, a world of robots,” in 2015 ICRE International Conference on Robot Ethics (Lisbon), 159–169.

Fridin, M., and Belokopytov, M. (2014). Acceptance of socially assistive humanoid robot by preschool and elementary school teachers. Comp. Hum. Behav. 33, 23–31. doi: 10.1016/j.chb.2013.12.016

Gogoll, J., and Müller, J. F. (2017). Autonomous cars: in favor of a mandatory ethics setting. Sci. Eng. Ethics 23, 681–700. doi: 10.1007/s11948-016-9806-x

Gunkel, D. J. (2018). Robot Rights, Cambridge: MIT Press. doi: 10.7551/mitpress/11444.001.0001

Hauer, T. (2017). Technological determinism and new media. Int. J. Engl. Literat. Soc. Just. 2, 1–4. Available online at: https://ijels.com/upload_document/issue_files/1%20IJELS-MAR-2017-8-Technological%20determinism%20and%20new%20media.pdf (accessed October 17, 2019)

Henkemans, O. A. B., Bierman, B. P. B., Janssen, J., Neerincx, M. A., Looije, R., van der Bosch, H., et al. (2013). Using a robot to personalise health education for children with diabetes type 1. Patient Educ. Couns. 92, 174–181. doi: 10.1016/j.pec.2013.04.012

Howard, V. A. (1998). Virtuosity in teaching. J. Aesthetic Educ. 32, 1–16. doi: 10.2307/3333379

Hung, I.-C., Chao, K.-J., Lee, L., and Chen, N.-S. (2012). Designing a robot teaching assistant for enhancing and sustaining learning motivation. Interact. Learn. Environ. 21, 156–171. doi: 10.1080/10494820.2012.705855

Johal, W., Castellano, G., Tanaka, F., and Oklta, S. (2018). Robots for learning. Int. J. Soc. Robot. 10, 293–294. doi: 10.1007/s12369-018-0481-8

Kahn, P. H. Jr, Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., et al. (2012). “Robovie, you'll have to go into the closet now”. Children's social and moral relationships with a humanoid robot. Dev. Psychol. 48, 303–314. doi: 10.1037/a0027033

Kanda, T., Hirano, T., Eaton, D., and Ishiguro, H. (2004). Interactive robots as social partners and peer tutors for children. Hum. Comp. Interact. 19, 61–84. doi: 10.1207/s15327051hci1901&2_4

Kanda, T., Nishio, S., Ishiguro, H., and Hagita, N. (2009). Interactive robots and androids in children's lives. Child. Youth Environ. 19, 12–33. doi: 10.7721/chilyoutenvi.19.1.0012

Keane, T., Chalmers, C., Williams, M., and Boden, M. (2016). “The impact of humanoid robots on students' computational thinking,” in Humanoid Robots and Computational Thinking, ACCE Conference (Brisbane). Available online at: https://eprints.qut.edu.au/112919/ (accessed March 15, 2019).

Kennedy, J., Lemaignan, S., and Belpaeme, T. (2016). “The cautious attitude of teachers towards social robots in schools,” in Robots 4 Learning Workshop at IEEE RO-MAN 2016. Available online at: https://biblio.ugent.be/publication/8528358/file/8528360 (accessed March 15, 2019).

Korzeniewski, B. (2001). Cybernetic formulation of the definition of life. J. Theor. Biol. 209, 275–286. doi: 10.1006/jtbi.2001.2262

Lee, E., Lee, Y., Kye, B., and Ko, B. (2008). “Elementary and middle school teachers', students' and parents' perception of robot-aided education in Korea,” in Proceedings of ED-MEDIA 2008 World Conference on Educational Multimedia, Hypermedia & Telecommunications, eds J. Luca and E. Weippl (Vienna: Association for the Advancement of Computing in Education), 175–183. Available online at: https://www.learntechlib.org/primary/p/28391/ (accessed March 15, 2019).

Leite, I., McCoy, M., Lohani, M., Ullman, D., Salomons, N., Stokes, C., et al. (2015). “Emotional storytelling in the classroom: Individual versus group interaction between children and robots,” in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (Portland: ACM), 75–82.

Lin, P., Abney, K., and Bekey, G. A. (2014). Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge: The MIT Press.

Lubart, T. (2017). “Creative thinking in virtual reality environments,” in Keynote Presentation at the 15th International ICIE Conference on Excellence Innovation & Creativity (Lisbon).

Macmurray, J. (2012). Learning to be human. Oxford Rev. Educ. 38, 661–674. doi: 10.1080/03054985.2012.745958

Major, L., Kyriacou, T., and Brereton, O. P. (2012). Systematic literature review: teaching novices programming using robots. IET Softw. 6, 502–513. doi: 10.1049/iet-sen.2011.0125

Mavridis, N., Katsaiti, M.-S., Naef, S., Falasi, A., Nuaimi, A., Araifi, H., et al. (2012). Opinions and attitudes toward humanoid robots in the Middle East. AI Soc. 27, 517–534. doi: 10.1007/s00146-011-0370-2

McCauley, L. (2007). The Frankenstein Complex and Asimov's Three Laws. Association for the Advancement of Artificial Intelligence. Available online at: https://www.aaai.org/Papers/Workshops/2007/WS-07-07/WS07-07-003.pdf (accessed March 15, 2019).

Meghdari, A., Alemi, M., Ghaazisaidi, M., Tahen, A. R., Karimian, A., and Vakih, M. Z. (2013). “Applying robots as teaching assistant in EFL classes in Iranian middle schools,” in Proceedings of the 2013 International Conference on Education and Modern Educational Technologies (Beijing), 68–73.

Mubin, O., Stevens, C. J., Shahid, S., Al Mahmud, A., and Dong, J. J. (2013). A review of the applicability of robots in education. J. Technol. Educ. Learn. 1:13. doi: 10.2316/Journal.209.2013.1.209-0015

Müller, V. C. (2017). Legal versus ethical obligations. Conn. Sci. 29, 137–141. doi: 10.1080/09540091.2016.1276516

Newhart, V. A., Warschauer, M., and Sander, S. (2016). Virtual inclusion via telepresence robots in the classroom. Int. J. Technol. Learn. 23, 9–25. doi: 10.18848/2327-0144/CGP/v23i04/9-25

Newton, D. P. (2012). Teaching for Understanding. London: Routledge.

Newton, D. P. (2014). Thinking With Feeling. London: Routledge.

Newton, D. P. (2016). In Two Minds. Ulm: ICIE.

Newton, D. P. (2018). “Emotions: Can't think with them, can't think without them,” in The Theory of Teaching Thinking, eds R. Wegerif and L. Kerslake (London: Routledge), 21–40.

Newton, D. P., and Newton, L. D. (2000). Do teachers support causal understanding through their discourse when teaching primary science? Br. Educ. Res. J. 26, 599–613. doi: 10.1080/713651580

Nørskov, M., and Yamazaki, R. (2018). “Android robotics and the conceptualization of human beings,” in Envisioning Robots in Society, eds M. Coeckelbergh, J. Loh, M. Funk, J. Seibt, and M. Nørskov (Amsterdam: IOS Press), 238–246.

O'Hara, M. (2007). Strangers in a strange land. Futures 39, 930–941. doi: 10.1016/j.futures.2007.03.006

Pandey, A. K., and Gelin, R. (2017). “Humanoid robots in education,” in Humanoid Robotics: A Reference, eds A. Goswami and P. Vadakkepat (New York, NY: Springer), 1–16. doi: 10.1007/978-94-007-7194-9_113-1

Paragas, F. C., and Lin, T. T. C. (2016). Organizing and reframing technological determinism. New Media Soc. 18, 1528–1546. doi: 10.1177/1461444814562156

Perry-Hazan, L., and Birnhack, M. (2018). The hidden human rights curriculum of surveillance cameras in schools: due process, privacy and trust. Cambr. J. Educ. 48, 47–64. doi: 10.1080/0305764X.2016.1224813

Persky, S., and Blascovich, J. (2006). “Consequences of playing violent games in immersive virtual environments,” in Avatars at Work and Play, eds R. Schroeder and A. S. Axelsson (Dordrecht: Springer), 167–186.

Pöntinen, S., Dillon, P., and Väisänen, P. (2017). Student teachers' discourse about digital technologies and transitions between formal and informal learning contexts. Educ. Inf. Technol. 22, 317–335. doi: 10.1007/s10639-015-9450-0

Rea, K. C. (2001). TRIZ and Software - 40 Principle Analogies, Part 1. Available online at: https://www.leanmethods.com/sites/leanmethods.com/files/TRIZ%20and%20Software%20-%2040%20Principle%20Analogies,%20Part%201.pdf (accessed April 24, 2019).

Robertson, J. (2007). Robo sapiens japanicus: humanoid robots and the posthuman family. Crit. Asian Stud. 39, 369–398. doi: 10.1080/14672710701527378

Robins, B., Dautenhahn, K., and Dickerson, P. (2009). “From isolation to communication,” in 2nd International Conference on Advances in Computer-Human Interactions (Cancun), 205–211.

Savransky, S. D. (2000). Introduction to TRIZ Methodology of Inventive Problem Solving. New York, NY: CRC Press.

SCAI (Select Committee on Artificial Intelligence) (2018). AI in the UK: Ready, Willing and Able? HL Paper 100. London: HMSO. Available online at: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf (accessed April 20, 2019).

Schaal, S. (1999). Is imitation learning the route to humanoid robots? Trends Cogn. Sci. 3, 233–242. doi: 10.1016/S1364-6613(99)01327-3

Serholt, S., and Barendregt, W. (2016). “Robots tutoring children: longitudinal evaluation of social engagement in child-robot interaction,” in Proceedings of the 9th Nordic Conference on Human-Computer Interaction (Gothenburg: ACM), 64.

Serholt, S., Barendregt, W., Leite, I., Hastie, H., Jones, A., Paiva, A., et al. (2014). “Teachers' views on the use of empathetic robotic tutors in the classroom,” in 23rd IEEE International Symposium on Robot and Human Interactive Communication (Edinburgh: IEEE), 955–960.

Serholt, S., Barendregt, W., Vasalou, A., Alves-Oliveira, P., Jones, A., Petisca, S., et al. (2017). The case of classroom robots: teachers' deliberations on the ethical tensions. AI Soc. 32, 613–631. doi: 10.1007/s00146-016-0667-2

Shin, N., and Kim, S. (2007). “Learning about, from, and with robots: students' perspectives,” in RO-MAN 2007 - The 16th IEEE International Symposium on Robot and Human Interactive Communication (IEEE), 1040–1045. Available online at: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4415235 (accessed March 15, 2019).

Sloman, A. (2006). Why Asimov's Three Laws of Robotics Are Unethical. Available online at: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/asimov-three-laws.html (accessed March 12, 2019).

Soroko, A. (2016). No child left alone: the ClassDojo app. Our Schools Our Selves 25, 63–74. Available online at: https://www.researchgate.net/profile/Agata_Soroko/publication/304627527_No_child_left_alone_The_ClassDojo_app/links/5775607508ae1b18a7dfdeed.pdf (accessed October 17, 2019).

Spirina, A. V., Semenkin, E. S., Schmitt, A., and Minker, W. (2015). Interaction quality in human-human conversations. J. Siber. Fed. Univ. 8, 217–223. Available online at: http://elib.sfu-kras.ru/handle/2311/16809 (accessed October 17, 2019).

Steinert, S. (2014). The five robots – a taxonomy of roboethics. Int. J. Soc. Robot. 6, 249–260. doi: 10.1007/s12369-013-0221-z

Tanaka, F., Cicourel, A., and Movellan, J. R. (2007). Socialization between toddlers and robots at an early childhood education center. PNAS 104, 17954–17958. doi: 10.1073/pnas.0707769104

Tanaka, F., and Matsuzoe, S. (2012). “Learning verbs by teaching a care-receiving robot by children: an experimental report,” in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (Boston, MA: ACM), 253–254.

TNS Opinion & Social (2012). Special Eurobarometer 382: Public attitudes towards robots. Brussels: European Union. Available online at: http://ec.europa.eu/commfrontoffice/publicopinion/archives/ebs/ebs_382_en.pdf (accessed March 22, 2019).

TNS Opinion & Social (2015). Special Eurobarometer 427: Autonomous systems. Brussels: European Union. Available online at: http://data.europa.eu/euodp/en/data/dataset/S2018_82_4_427_ENG (accessed March 22, 2019).

Toh, L., Causo, A., Pei-Wen, T., Chen, I.-M., and Yeo, S. H. (2016). A review of the use of robots in education and young children. J. Educ. Technol. Soc. 19, 148–163. Available online at: https://www.jstor.org/stable/10.2307/jeductechsoci.19.2.148 (accessed October 17, 2019).

van den Berghe, R. (2019). Social robots for language learning. Rev. Educ. Res. 89, 259–295. doi: 10.3102/0034654318821286

van den Hoven, J., and Vermaas, P. E. (2007). Nano-technology and privacy: on continuous surveillance outside the panopticon. J. Med. Philos. 32, 283–297. doi: 10.1080/03605310701397040

van der Want, A. C., den Brok, P. J., Beijaard, D., Brekelmans, J. M. G., Claessens, L., and Pennings, H. J. M. (2018). Changes over time in teachers' interpersonal role identity. Res. Papers Educ. 33, 354–374. doi: 10.1080/02671522.2017.1302501

Vogt, P., van den Berghe, R., de Haas, M., Hoffman, L., Kanero, J., Mamus, E., et al. (2019). “Second language tutoring using social robots,” in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 497–505. Available online at: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8673077 (accessed March 15, 2019).

Wolfe, E., Weinberg, J., and Hupp, S. (2018). “Deploying a social robot to co-teach social emotional learning in the early childhood classroom,” in Adjunct Proceedings of the 13th Annual ACM/IEEE International Conference on Human-Robot Interaction. Available online at: http://socialrobotsinthewild.org/wp-content/uploads/2018/02/HRI-SRW_2018_paper_5.pdf (accessed October 17, 2019).

Zembylas, M. (2003). Interrogating ‘teacher identity’: emotion, resistance, and self-formation. Educ. Theory 53, 107–127. doi: 10.1111/j.1741-5446.2003.00107.x

Keywords: robot teachers, teachers' code of practice, teachers' roles/identity, digital versus human thought, fostering constructive thinking

Citation: Newton DP and Newton LD (2019) Humanoid Robots as Teachers and a Proposed Code of Practice. Front. Educ. 4:125. doi: 10.3389/feduc.2019.00125

Received: 05 June 2019; Accepted: 14 October 2019;
Published: 05 November 2019.

Edited by: Sean McCusker, Northumbria University, United Kingdom

Reviewed by: Jacqueline Joy Sack, University of Houston, United States; Karen Lenore Taylor, International School of Geneva, Switzerland

Copyright © 2019 Newton and Newton. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Douglas P. Newton, d.p.newton@durham.ac.uk
