
REVIEW article

Front. Robot. AI, 27 January 2022
Sec. Human-Robot Interaction
Volume 8 - 2021 | https://doi.org/10.3389/frobt.2021.720319

Helping People Through Space and Time: Assistance as a Perspective on Human-Robot Interaction

  • Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States

As assistive robotics has expanded to many task domains, comparing assistive strategies among the varieties of research becomes increasingly difficult. To begin to unify the disparate domains into a more general theory of assistance, we present a definition of assistance, a survey of existing work, and three key design axes that occur in many domains and benefit from the examination of assistance as a whole. We first define an assistance perspective that focuses on understanding a robot that is in control of its actions but subordinate to a user’s goals. Next, we use this perspective to explore design axes that arise from the problem of assistance more generally and explore how these axes have comparable trade-offs across many domains. We investigate how the assistive robot handles other people in the interaction, how the robot design can operate in a variety of action spaces to enact similar goals, and how assistive robots can vary the timing of their actions relative to the user’s behavior. While these axes are by no means comprehensive, we propose them as useful tools for unifying assistance research across domains and as examples of how taking a broader perspective on assistance enables more cross-domain theorizing about assistance.

1 Introduction

Smart wheelchairs navigating easily through crowded rooms, coaching robots guiding older adults through stroke rehabilitation exercises, robotic arms aiding motor-impaired individuals in eating a meal at a restaurant: these are all examples of research from areas as disparate as intelligent motion planning, rehabilitative medicine, and robotic manipulation, each independently identified as contributing to the development of robots that can do helpful things for people. This research has been fruitful, but it has remained siloed as researchers from these various fields focus on the specific assistive tasks relevant to their own disciplines.

A lack of common structure in the field of assistive robotics makes it difficult for researchers to incorporate findings from other domains into their own work. For example, how does the relationship between a grocery stocking robot and the surrounding customers relate to the relationship between an airport guide robot and the surrounding crowd? Does a robot designed to autonomously declutter a room convey a similar sense of agency as a virtual robot suggesting an optimal ordering in which you should clean your room? Answers to these and similar questions would form a basis that would provide clarity for research in assistive robotics, but are currently difficult to determine due to the disparate nature of assistive robotics.

In this work, we identify a subset of common challenges and develop themes that begin a conversation about how assistance can be abstracted from specific problem domains and used to answer questions about assistance generally, thereby benefiting the entire field of assistive robotics. This abstraction would enable researchers to explore the underlying principles of assistive robotics and communicate them across domains. To start, we suggest that assistance is not a characteristic of a robotic system, as it has historically been treated. Instead, assistance is a task-independent perspective on human-robot interaction (HRI). Treating assistance as a task-independent perspective on HRI, we can group existing assistive research by its effect on three key axes: people (e.g., who is involved in the system and the roles they play), space (e.g., how the robot’s action affects the task), and time (e.g., when the robot performs its actions during the task).

This perspective considers an assistive system as an interaction in which a user and a robot forge a complex, asymmetric relationship guided by the user’s goals. This perspective is somewhat different from general HRI because the user is responsible for determining the interaction’s end goal while the robot acts in service of this goal. Similar to other collaborative settings, the human-robot pair is then tasked with performing subsequent actions to achieve the human’s goal, but unlike some collaborations, maintaining human autonomy is paramount. In this relationship, the robot has more agency and independence of action choice than a simple tool (i.e., the robot’s choice of action is not determined solely by the user), but it must defer to the user’s goal and independent actions.

We introduce three design dimensions with which roboticists can begin to reason about the assistive interactions of robots and humans. First, we discuss how the assistive robot’s role can be described with respect to the relationship it has with its user, for example, how it weighs priorities when there are multiple potential people it could assist. Second, we propose that an assistive robot’s role can be described in terms of how it operates in the execution space, that is, the space in which the robot has its primary effect. Finally, we propose that the same robot’s actions can be described in terms of the temporal space, that is, the duration and sequence of the actions. We support these dimensions by reviewing and grouping over 200 recent assistive robotics research papers.

By using assistance as a lens through which to analyze patterns that arise in assistive robotics, we hope to help designers of assistive robots more easily explore the design space and identify similar examples of past solutions, even across application domains. Additionally, we hope this work will motivate researchers to continue to refine this notion of assistance and its effects on human-robot interaction paradigms.

2 The Assistance Perspective

In the field of robotics, defining assistance can be tricky. In a broad sense, every robot is built to assist some person, so we do not attempt to separate assistive systems from non-assistive systems. Instead, we propose assistance as a particular perspective through which many robotic systems can be viewed: one that considers robotic agents that are autonomous in action but subordinate in goal to a human partner. Because almost any robot system can, in theory, be viewed as assistive to someone, we do not limit this scope; rather, we explore what the analytic framework provides. This perspective clarifies particular design trade-offs and trends common to assistive systems regardless of task domain. In this work, we describe several key design axes that arise when considering a robotic system as assistive and discuss the implications these axes have on the interaction.

Before discussing these key design axes, we first formalize what we mean by a human-robot interaction, then provide a more detailed description of what it means to view assistance as a perspective. Next, we give a brief synopsis of previous attempts to characterize assistance and assistive robotics, and finally we give an overview of the remainder of this paper.

2.1 General Human-Robot Interaction

Before discussing assistance, we first sketch a general framework for human-robot interaction, which we draw broadly from multi-agent systems research. Formalizations of this problem can be found in previous literature (Jarrassé et al., 2012); here we only establish enough language to discuss assistance rather than requiring assistive systems to use this exact model.

First, we define a user $u \in U$ as any person involved closely in the interaction. Typically, the user is in close physical proximity to the robot and provides explicit or implicit control signals to the robot. For example, a person teleoperating a robotic arm, getting directions from a social robot, or building a table with a robot helper would be considered a user.

Next, the system has at least one robot $r \in R$. Canonically, a robot is defined as an embodied system that can sense its environment, plan in response to those sensory inputs, and act on its environment. An assistive robot may have a wide array of sensory, planning, and acting capabilities in order to be successful in its task. Some of these capabilities will be critical for the robot’s functioning (e.g., LIDAR to avoid hitting obstacles), while others will be critical for providing assistance to the user (e.g., a body pose recognition algorithm to identify the user’s location and gestures).

Finally, these agents exist in a shared environment, each with its own internal state. These are described in totality by the mutual state $s_m = (s_r, s_u, s_e)$, which comprises the individual states of the robot, user, and environment. The robot and user both have goals $g_r, g_u \in G$ and can take actions $a_r \in A_r$ and $a_u \in A_u$ that affect their mutual state. By acting to update their mutual state, each agent has the potential to affect the other agent’s behavior, resulting in an interaction between the two agents. Depending on the exact scenario, a task is considered complete when one or more agents has achieved their goal.
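
To make this formulation concrete, the following minimal sketch (ours, not a formalization from the cited literature) expresses the mutual state and one interaction step in Python; all names and types are illustrative placeholders:

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class MutualState:
    # Mutual state s_m = (s_r, s_u, s_e)
    s_r: Any  # robot state, e.g., joint configuration
    s_u: Any  # user state, e.g., pose or expressed intent
    s_e: Any  # environment state, e.g., object locations

UserPolicy = Callable[[MutualState], Any]   # returns an action a_u in A_u
RobotPolicy = Callable[[MutualState], Any]  # returns an action a_r in A_r

def step(s_m, user_policy, robot_policy, transition):
    # One interaction step: each agent acts on the shared mutual state,
    # so each action can influence the other agent's subsequent behavior.
    a_u = user_policy(s_m)
    a_r = robot_policy(s_m)
    return transition(s_m, a_u, a_r)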

2.2 Assistance as a Perspective on Human-Robot Interaction

Using this formulation, we can more carefully define assistance. Assistive systems interpret the robot as autonomous in its actions but subordinate in its goal. By giving the user the sole responsibility for setting both agents’ goals, the two agents now attempt to satisfy some shared goal $g$ by reaching a mutual state where $g$ holds: $s_m \models g$. This framing distinguishes assistive robotics from both traditional assistive technologies like a white cane, which has no control over its actions or goals, and traditional robotics, which develops systems with full control over their actions and goals. This framing gives rise to three key design axes: how assistive robots affect people through space and time. The discussion of these implications is the subject of the rest of this paper.
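
Building on the sketch above, the assistance constraint can be expressed by driving both agents toward a single user-set goal and terminating once it holds; again, this is an illustrative sketch rather than a formalism from the paper:

def run_assistance(s_m, user_policy, robot_policy, transition, goal_holds,
                   max_steps=100):
    # The robot is autonomous in choosing a_r but subordinate in goal:
    # goal_holds encodes the single shared goal g, set by the user alone,
    # and the interaction ends once s_m |= g.
    for _ in range(max_steps):
        if goal_holds(s_m):
            return s_m
        s_m = step(s_m, user_policy, robot_policy, transition)
    return s_m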

In HRI, as in assistive robotics, there is no requirement for there to be a single user. In fact, many assistive robotics scenarios involve more than one user. This becomes challenging, as it is the responsibility of one of these users to set the goal for the robot, but selecting which user has this responsibility may change the type of assistance the robot is able to provide. This is especially true when one user’s goals may conflict with another user’s goals. This highlights the importance of determining the roles of people when considering assistive robotics problems (Section 4).

Furthermore, since the user and robot are working to accomplish the same goal, the robot has freedom over its action space. As a baseline, the robot can assume the user would perform the task independently, without its aid. The robot can then choose its action space to align with how it can most beneficially assist the user relative to this baseline. In addition to the standard strategy of directly manipulating the environment, the robot can assist by altering the user’s state, encouraging the user to make more effective task progress. For example, a head-mounted augmented reality device displaying the optimal path for cleaning a room can assist the user without needing to physically interact with objects. Assistive scenarios thus allow more choice over the robot’s action space than general robotics scenarios do (Section 5).

Finally, in order to advance to the mutual goal state and complete the task, the user and robot each complete a sequence of actions ($a_u^1, \ldots, a_u^t$ and $a_r^1, \ldots, a_r^t$, respectively) that transitions the system to a desired goal state $s_m'$ such that $s_m' \models g$. Given that these actions occur in the mutual state, it is important that the user and the robot time their actions appropriately, so that they do not attempt to solve the same part of the task simultaneously or, worse, take conflicting actions that undo each other’s work. How to time actions is crucial to studying assistive robotics (Section 6).

Each of these axes presents researchers with decisions that result in critical trade-offs when designing an assistive robot. Throughout the remainder of this work, we will describe how assistive robots from different application domains fall along these axes.

By taking assistance as a perspective, it is our goal to provide an abstraction that allows for comparing systems from different domains to discover universal challenges that arise from robot assistance. We do not suggest that these axes describe a full assistive system or are a complete set of critical design axes. Rather, viewing assistance along these particular axes of people, space, and time enables some cross-domain comparisons and insights on its own, and it also demonstrates how assistance overall can benefit from a general examination.

2.3 Prior Categorizations of Assistive Robotics

By grouping assistive robots along the aforementioned design axes, we view assistance as an abstract concept that illuminates parallel research problems across different application domains. We build on previous literature which categorizes assistive robotics within particular application domains, for example socially assistive robots (Fong et al., 2003; Matarić and Scassellati, 2016), joint action (Iqbal and Riek, 2019) and physically assistive robots (Brose et al., 2010).

Some work does try to describe assistance as a whole. Jarrassé et al. (2012) categorizes joint action between dyads by positing a cost function for each agent defined on each agent’s task error and required energy. Among categories in which both agents are working together towards the same goal, the paper specifies collaboration between two equal peers, assistance when one agent is subordinate to another, and education in which the educator assists the partner but moderates its own effort to encourage increasing effort from its partner. We take this core idea of assistance as subordination and build on it in our definition of the assistance perspective.

Most similar to the current work, perhaps, is the accounting given in Wandke (2005). This overview of assistance in human-computer interaction notes that defining assistance as any system that provides some benefit to the user would include nearly all technical artifacts. Therefore, the paper restricts its attention to systems that bridge the gap between a user and the technical capabilities of the system due to the user’s unfamiliarity with the system or excessive burden of use. In contrast to this approach, our work presents assistance as a perspective rather than a definition; it could in principle be applied to any technical artifact but may only be useful for some. Additionally, this definition of assistance focuses on how assistive systems correct a deficiency in a user’s understanding of the system or capability to use it. In contrast, our definition of assistance as a perspective admits beneficial actions from the robot of all sorts, not just those repairing the user’s ability to use a system.

2.4 Overview of This Paper

By defining assistance as a perspective, we provide language to discuss ideas about assistance from different domains. This will allow researchers from various areas of assistive robotics to come together to illuminate and discuss common research challenges. Additionally, researchers can make design decisions about how the assistive robot affects people in space and time by using this framework to consider similar approaches to problems from disparate task domains. In the remainder of this paper, we discuss these design axes and explore their implications through a review of existing assistive robotics literature. Section 3 describes our method for collecting these papers; Section 4 describes the people design axis, Section 5 the space design axis, and Section 6 the time design axis. These axes are summarized in Table 1. We then conclude the paper with a discussion of the implications of this work.

TABLE 1. Assistive robots can be explored along three key axes: how the assistive system thinks about additional people, what part of the mutual state aligns with its action space, and at what time it executes its actions during a task.

3 Methods

To develop this taxonomy, we conducted a literature review of recent papers on assistive robotics.

3.1 Initial Search

First, we hand-selected 74 papers from the last 5 years of the annual Human-Robot Interaction conference (HRI 2016–2020). From these papers, we generated an initial set of search terms by aggregating titles, abstracts, and author-generated keywords using the R (R Core Team, 2017) package litsearchr (Grames et al., 2019). Using these aggregated keywords, we formed an initial search query.

3.2 Refined Search

We ran the initial search query on the Web of Science. This search yielded approximately 1,500 papers. We repeated the keyword aggregation on this set of papers and then hand-selected new keywords from among the results based on their prevalence and relevance to assistive robotics. We repeated the Web of Science query with this refined set of keywords, which again yielded approximately 1,500 papers. The refined search was run on 29 January 2021. We included a paper if the following statement evaluated to true when matched against the full text of the paper.

((assist NEAR robot)
OR (collab NEAR robot))
AND (human OR people OR person OR subject OR user OR “elderly people” OR “older adults” OR “natural human” OR “stroke patients” OR “healthy subjects”)
AND (“human-robot interaction” OR “human-robot collaboration” OR “robot interaction” OR “robot collaboration” OR collaboration OR hri OR “human robot collaboration” OR “physical human-robot interaction” OR “human robot interaction” OR “machine interaction” OR “human-machine interaction” OR “human interaction”)
AND (“collaborat task” OR “assembly task” OR “social interaction” OR “assembly process” OR “shared workspace” OR “manipulation task” OR “human safety” OR “daily living” OR “service robot” OR “production system” OR “safety standard” OR “mobile robot” OR “assisted therap” OR “collision avoidance” OR “object manipulation” OR “collaborative assembly” OR “socially assistive” OR “assistive *robot” OR “social robot” OR “teleoperat”))
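
As an illustration of how such an inclusion test could be evaluated programmatically (our hypothetical sketch: the five-word NEAR window and word-boundary handling are assumptions, and the keyword lists are abbreviated):

import re

def near(words, stem_a, stem_b, window=5):
    # True if a word starting with stem_a occurs within `window` words of
    # a word starting with stem_b (assumed semantics for the NEAR operator).
    ia = [i for i, w in enumerate(words) if w.startswith(stem_a)]
    ib = [i for i, w in enumerate(words) if w.startswith(stem_b)]
    return any(abs(i - j) <= window for i in ia for j in ib)

def include_paper(full_text):
    t = full_text.lower()
    ws = re.findall(r"[a-z0-9-]+", t)
    return ((near(ws, "assist", "robot") or near(ws, "collab", "robot"))
            and any(k in t for k in ("human", "people", "person", "user"))  # abbreviated
            and any(k in t for k in ("human-robot interaction",
                                     "human robot interaction"))            # abbreviated
            and any(k in t for k in ("assembly task", "socially assistive",
                                     "teleoperat")))                        # abbreviated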

3.3 Paper Selection

Starting from the refined Web of Science results, we filtered out all papers from venues with fewer than two related documents and papers that were older than 5 years, with a small exception. In an attempt to keep papers with significant contributions to the field, papers older than 5 years were kept if they had more than 10 citations. This process left approximately 465 papers. Each paper in this set was then manually checked for relevance by reading the title and abstract. To be included, we required the paper to include both 1) an assistive interaction with the user and 2) a system capable of taking actions. This step mainly removed papers focused on robotic system development or perception improvements rather than assistance itself. This yielded 313 papers, each of which was again reviewed against the aforementioned exclusion criteria. The entire search process yielded over 200 papers that we classified into our taxonomy.
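
The venue, age, and citation filters compose a simple pipeline; the following is a hypothetical sketch in which the field names are ours:

from collections import Counter

def filter_candidates(papers, current_year=2021):
    # Keep papers whose venue has at least two related documents, and drop
    # papers older than 5 years unless they have more than 10 citations.
    venue_counts = Counter(p["venue"] for p in papers)
    def keep(p):
        if venue_counts[p["venue"]] < 2:
            return False
        if current_year - p["year"] > 5 and p["citations"] <= 10:
            return False
        return True
    return [p for p in papers if keep(p)]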

4 People

In Section 2, we described assistance with a single user. This description works well for situations that have only one user, which is common in laboratory settings. In realistic settings, however, a robot will typically encounter more than one person in the course of completing its task. These other people can act in a variety of different roles within the interaction. In this section, we explore themes in how assistive interactions incorporate more people into the general human-robot dyad (Figure 1).

FIGURE 1. An assistive system can treat people beyond a single user as additional targets of assistance or as interactants, and either choice introduces particular complications into the assistive dynamic.

4.1 Terminology

The simplest approach a system can take towards other people is to ignore them completely. While this case tends not to be analyzed explicitly, it is implicit in many systems. This strategy can be appropriate, especially in situations where encountering additional people is rare. When working around other people, though, the robot can implicitly account for them by relying on its primary user to provide controls that appropriately consider other people. Finally, a robot might intentionally downplay its relationship to additional people when accounting for them would conflict with its primary user’s goals, such as an emergency response robot that ignores standard social navigation behaviors to reach its patient as fast as possible.

When the system does choose to reason about other people, it can divide them into two different roles: targets of assistance, whose goals the robot privileges and weighs equally against those of any other targets; and interactants, who require the attention owed to any person, as explored throughout human-robot interaction research, but whose goals the robot does not privilege.

A target of assistance derives directly from the definition of assistance: an assistive scenario must support the goals of at least one person. Consider a scenario in which a person who has a spinal cord injury uses a robotic arm to aid them in eating a meal with friends at a restaurant. In this scenario, the arm’s user sets the goal for the robot: to bring food from their plate to their mouth so they can consume it.

The second role a person can play in an interaction is that of interactant. An interactant is any other person involved in the scenario who is not a target. Continuing the previous example, the people who are out to dinner with their robot-operating friend are interactants. They have no direct bearing on the robot’s goal, but they are potentially affected by the robot’s actions and may require some design effort for the system. For example, the robot may have to avoid collisions with them during its operation. While the robot’s relationship to interactants is not assistive, the presence of a specific target of assistance can affect how the robot interacts with others.

When considering assistive systems that involve more than a single person, the system must determine in which of these roles to consider the additional people. These two roles are not mutually exclusive; there can be more than one of each in a given scenario, and the same person can occupy both. Additionally, both targets of assistance and interactants can give explicit control input to the robot. Designating people as additional targets or as interactants brings about different challenges for the assistive system, as sketched below.
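
One lightweight way a system designer might make these role assignments explicit (our illustration, not a prescribed implementation):

from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    TARGET = auto()       # goals are privileged by the robot
    INTERACTANT = auto()  # accounted for, but goals are not privileged

@dataclass
class Person:
    name: str
    roles: set           # a person may hold both roles at once
    goal: object = None  # set only by targets of assistance

def goals_to_serve(people):
    # The robot plans only for goals set by designated targets.
    return [p.goal for p in people
            if Role.TARGET in p.roles and p.goal is not None]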

4.2 Additional Targets of Assistance

One challenge arising from a single robot having multiple targets of assistance is that the goals issued by these targets can conflict with one another. In the eating scenario, the robot might instead be assisting everyone present, perhaps by both feeding its user and serving food to other people at the table. Here, the robot is presented with a conflict: how should it choose to prioritize the goals given by its targets and reconcile differences between them?

This can be especially challenging in contexts such as education. An educational robot might consider the teacher as its target and work to enrich a student according to a mandated curriculum. It can also consider the student as its target and try to engage the student with concepts that are interesting to them regardless of the curriculum. Much research in this area aims to make the content proposed by the teacher more enjoyable by developing robotic behaviors that are meant to keep the student engaged. Leite et al. (2015) designed a robot puppet show to engage young learners in an educational story, Martelaro et al. (2016) designed a robot that encourages students to develop trust and companionship with their tutor, and Christodoulou et al. (2020) designed a robot to give nonverbal feedback to students in response to quiz answers to keep them engaged with the testing material. In contrast, Davison et al. (2020) took a different approach and developed the KASPAR robot to look like another student and deployed it in unsupervised interactions that were totally motivated by the student. In this way, they allowed the student to approach the learning material voluntarily, giving the student more agency to learn what they desired and at their own pace.

This dilemma can again be seen in therapeutic contexts, where a robot must reconcile the goals of the doctor and the patient. A robot can increase a patient’s motivation to do mundane, repetitive, or uncomfortable exercises by performing the exercise alongside the patient (Tapus et al., 2007; Schneider and Kummert, 2016). Alternatively, a robot could be used to give the patient more agency and independence over their own treatment by helping someone independently practice meditation (Alimardani et al., 2020), perform cognitive behavioral therapy (Dino et al., 2019), or carry out home therapy for autism (Shayan et al., 2016).

A full analysis of these interactions treats both the teacher and the student, or both the therapist and the patient, as targets of assistance with goals that often align but are not identical. This alignment mismatch can often lead to ethical challenges, which are even more fraught when the capabilities, agency, and relative power of the possible targets vary. While there is no general technical solution, this language encourages designers to explicitly enumerate the multiple targets of the assistance and to reason directly about conflicts in their goals.

4.3 Additional Interactants

On the other end of the spectrum are robots that treat additional people in the system as interactants. Robots designed with this relationship in mind prioritize the goals of their target of assistance. In our assisted eating scenario, the robot may need to follow basic social norms around the other diners by avoiding collisions with them, but it does not privilege their goals.

This relationship is typically used in scenarios where some figure of authority (e.g., a teacher or a therapist) needs to relieve themselves of some amount of work. For example, a teacher could employ a robot to teach half of their class in order to reduce the student-to-teacher ratio for a particular lesson (Rosenberg-Kima et al., 2019), or even have the robot teach the class alone if they need to finish other work (Polishuk and Verner, 2018). In this way, the teacher is the target of assistance, while the students are treated only as interactants. The robot should be able to teach competently enough to achieve the teacher’s goals, but the students’ preferences about using the robot are not of direct concern.

Similarly, in emotional or physical therapy, a robot can be employed to lead group sessions in lieu of a doctor who may have more classes than they can handle (Fan et al., 2016; Ivanova et al., 2017). Alternatively, the robot may be better suited to collecting certain information than a human would be. For example, a patient who has suffered a stroke may be unable to produce certain social signals expected during social interaction. This could negatively affect a doctor’s opinion of the patient, a problem that could be circumvented by having a robot collect this information (Briggs et al., 2015; Varrasi et al., 2019). The patient here, however, is not asked whether they would prefer the social interaction regardless of the implicit bias the doctor may possess.

These systems don’t generally follow an assistance dynamic with interactants; rather, general human-robot interaction research applies. However, the fact that the system has a target, even if the target is not present, can change the robot’s behavior: a robot acting as a proxy for a specific teacher may behave differently than one employed as a general-purpose robot, which may have bearing on how the general human-robot interaction problem is resolved.

4.4 Combinations of Roles

If an assistive robot has multiple additional people present in the interaction, it can choose to consider some of them as targets and others as interactants. In this arrangement, our assisted eating robot might treat both the user and the companion seated next to them as targets of assistance, while eating companions seated further from the user are treated as interactants. In this way, the robot can carefully maintain the goals of multiple people in proximity to it. This framework allows for more complex robot behavior near the user without the additional complication of handling everyone else at the table.

Another example would be a robot that participates in a collaborative scenario with multiple human actors, some of whom serve as both targets of assistance and interactants, while others are only interactants. For example, consider a local repair person who needs help from a remote repair person. To give instructions, the remote repair person can use a robot to highlight the parts of the environment they are discussing (Machino et al., 2006). In this way, both actors are interactants in the scenario, but only the local repair person is a target of assistance.

4.5 Implications

These various relationships clarify the design choices involved in developing an assistive system. A particular task, such as assistive eating, does not require a particular relationship between the robot and the people it encounters. Rather, how a robot relates to these people is a design decision that will have implications as to how the task is completed.

The choice of roles affects how assistive systems with multiple people are evaluated. When treating the user and their eating companions all as targets of assistance, the robot would need to verify that it is helping them all achieve their independent goals. This type of evaluation may be difficult to measure and nearly impossible to satisfy when the companions’ interests conflict with the user’s. Identifying what type of relationship the robot should have with its users can help researchers disambiguate otherwise similar systems and determine which evaluations are important.

The choice of which roles to use may also have implications for how much autonomy to grant the robot. A robot that balances the goals of many people may require complex sensing, modeling, and planning to carefully moderate between them. A simpler robot might delegate this goal-moderation problem to its user and treat additional people as interactants or ignore them entirely. Such a system gives the target more control over the goals but requires additional input from the user. If the robot maintains full autonomy in this scenario but does not plan for other people’s goals, it may in fact endanger them by running into them where another system would have chosen to avoid them. These ideas show how the choice of relationship between the robot and the people it encounters throughout a task can impact the design of the final system.

5 Space

Assistive robotic systems can perform similar tasks by acting in different action spaces. We showed in Section 2 how to represent the mutual state during the interaction as the state of the user $s_u$, the state of the robot $s_r$, and the state of the environment $s_e$. In general, a user employing an assistive robot is aiming to make some alteration to $s_e$. Since the robot is tasked with aiding the user rather than directly accomplishing this state alteration, it can assist by making a change to any part of the mutual state that makes it easier for the user to accomplish their goal. In this manner, a robot can provide many different types of assistance when helping to complete the same overall task.

Consider an assistive eating robot. The robot and its user sit at a table across from one another, with a plate of food between them. The user’s goal is to eat the food. The robot can provide assistance by performing a variety of different actions: it can act on the user’s mental state by projecting a light onto a morsel of food that would be easy to grab next, it can change the physical state of the user by guiding their hand into an appropriate position, or it can change the environment by picking up the morsel and feeding it to the user. All of these action spaces apply to the same task and the same goal; what differs is the way in which the user would most benefit from assistance.
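
A schematic way to see that the action space, rather than the task, determines the robot’s action (the action descriptions simply restate the example above):

from enum import Enum, auto

class ActionSpace(Enum):
    MENTAL = auto()       # act on the user's mental state (within s_u)
    BODY = auto()         # act on the user's physical state (within s_u)
    ENVIRONMENT = auto()  # act directly on the environment state s_e

# One task and goal (assisted eating), three assistance strategies:
EATING_ACTIONS = {
    ActionSpace.MENTAL: "project a light onto an easy-to-grab morsel",
    ActionSpace.BODY: "guide the user's hand into a good position",
    ActionSpace.ENVIRONMENT: "pick up the morsel and feed the user",
}

def assist(space):
    # The choice of action space, not the task, selects the robot's action.
    return EATING_ACTIONS[space]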

To illustrate this point more broadly, we provide a review of recent assistive robotics literature, grouped by whether the robot is acting on the user’s mind, user’s body, or environment (Figure 2).

FIGURE 2. A robot can provide assistance by acting in several different action spaces. It can assist by giving information to the user, adjusting the user’s body, or changing the environment to help complete the task.

5.1 Environment

One straightforward assistive robot is one that simply completes a task for the user. For example, research has focused on autonomous butler robots (Srinivasa et al., 2010, 2012) that perform tasks such as cooking and cleaning. Such a robot assists a user by navigating around the apartment, picking up misplaced items such as dirty laundry and dishes, and placing them in appropriate locations such as a laundry hamper or dishwasher. The robot provides assistance by directly changing the environment. To meet the minimal requirement of providing assistance (i.e., delivering some benefit to the target of assistance), the robot must shift the environment from an undesirable state configuration to a more desirable one.

Much research surveyed here assists users in exactly this way: by providing autonomous assistance through environmental state manipulations. Researchers have explored how a user can command a robot to organize a messy room (Mertens et al., 2011; Cremer et al., 2016; Koskinopoulou et al., 2016; Pripfl et al., 2016; Jensen et al., 2017), fetch misplaced or distant items (Iossifidis and Schoner, 2004; Unhelkar et al., 2014; Huang and Mutlu, 2016; Wieser et al., 2016), or even perform more specialized tasks autonomously (under the direction of the user) such as assisted eating (Canal et al., 2016) and other tasks of daily living (Nguyen and Kemp, 2008), search and rescue (Doroodgar et al., 2010), welding (Andersen et al., 2016a), or other industrial tasks (Mueller et al., 2017). Assistive tasks performed autonomously at the request of a user through environmental manipulation can provide several benefits. This method of task execution requires little user input, which makes it efficient for users who prefer not to spend time on chores and beneficial for users who may not be able to accomplish the task at all.

Environmental assistance is not solely the domain of autonomous robots, however. Collaborative robots, specifically in tasks where the user and the robot take independent actions that jointly manipulate the environment towards a mutual goal state, also perform environmental assistance. Examples of such systems include collaborative cleaning (Devin and Alami, 2016) and assembly (Savur et al., 2019; Zhao et al., 2020). A robot working collaboratively with a user can improve its efficiency by modeling the user’s behavior, for example by determining specific poses to hold an object in to facilitate fluid collaboration during assembly (Akkaladevi et al., 2016) or by anticipating and delivering the next required item in assembly (Hawkins et al., 2013, 2014; Maeda et al., 2014) or cooking (Koppula et al., 2016; Milliez et al., 2016), or by providing help under different initiative paradigms during assembly (Baraglia et al., 2016). Collaborative environmental assistance can also be used to perform joint actions with a user, such as in handovers (Cakmak et al., 2011; Kwon and Suh, 2012; Grigore et al., 2013; Broehl et al., 2016; Canal et al., 2018; Cserteg et al., 2018; Goldau et al., 2019; Lambrecht and Nimpsch, 2019; Nemlekar et al., 2019; Newman et al., 2020; Racca et al., 2020), where the goal is to transfer an object from the robot’s end effector to the user’s hand; or co-manipulation (Koustoumpardis et al., 2016; Nikolaidis et al., 2016; Schmidtler and Bengler, 2016; Schmidtler et al., 2016; El Makrini et al., 2017; Goeruer et al., 2018; Rahman, 2019b; DelPreto and Rus, 2019; Rahman, 2020; Wang et al., 2020), where the aim is for the user and the robot to jointly move an object to a specified location or provide redundancy in holding an object in a joint assembly task (Parlitz et al., 2008) or safety critical situation such as surgery (Su et al., 2018).

So far, all examples of environmental assistance have been provided by standalone robots, commonly taking on a humanoid or robotic arm morphology. These robots affect the environment by changing their own configurations first (e.g., using a robot arm to pick up an object). As such, they are considered decoupled from the environment. Robots can also be designed to be coupled with the environment; in these examples, it is hard to distinguish between the robot’s state and the environment state. These robots often take on more conspicuous yet specialized morphologies, such as a mechanical ottoman (Sirkin et al., 2015; Zhang et al., 2018). For example, a robotic suitcase can assist an airline passenger by following them through an airport (Ferreira et al., 2016) and manipulating the user’s sense of trust by moving across various proxemic boundaries. A set of robotic drawers containing tools can assist a user in completing an assembly by proactively opening the drawer containing the next required tool (Mok, 2016), and it can also manipulate a user’s enjoyment in completing the task by employing emotional drawer-opening strategies. Environmentally coupled robots can be designed to be “invisible” (Sirkin et al., 2015) or to be modifications to an existing environment or object. Moving away from more traditional robot appearances may mitigate any negative effects from interacting with a robot.

Other approaches include shared control, which separates the responsibilities of the user and the robot during the task. For example, a teleoperated surgery robot can hold a patient’s skin taut so that the surgeon can focus on performing incisions (Shamaei et al., 2015). A telepresence robot (Kratz and Ferriera, 2016) can automatically avoid obstacles during navigation (Acharya et al., 2018; Stoll et al., 2018) or automatically rotate its camera to keep a desired object within view (Miura et al., 2016). Finally, a remote, teleoperated space robot can perform as much of a task as is possible before it pings the space station for human intervention (Farrell et al., 2017). By configuring itself according to some of the task requirements, the robot allows the user to focus on other parts of the task.

5.2 Human Body

While assistance applied directly to the environment can solve a wide variety of tasks, some tasks require alternate strategies. One such scenario is when some change to the user’s physical state is required to perform the task. For example, consider a robot designed to assist a user who has difficulty bathing themselves. While it is technically possible for that robot to transform the environment by bringing a bathtub to the user, this is obviously impractical. The robot can instead transform the user’s state by bringing them closer to the bathtub (Dometios et al., 2017; Papageorgiou et al., 2019). This strategy of moving a user to assist them is similar to autonomous environmental manipulation, but with the user being manipulated instead of the environment. This strategy leaves the user with limited agency and is typically only employed when the user has minimal ability to complete the task themselves.

In cases where users can perform some aspects of the task, a robot can also assist by supplementing a user’s existing abilities. For example, if a user can walk but has difficulty balancing or navigating, a smart walker can be utilized to help the user navigate between locations (Papageorgiou et al., 2019; Sierra et al., 2019). Similarly, if a user has some control over their limbs, an exoskeleton robot can be used to provide extra support for day-to-day usage (Baklouti et al., 2008; Lim et al., 2015; Choi et al., 2018; Nabipour and Moosavian, 2018) or in therapeutic scenarios in order to help a user strengthen weakened muscles (Carmichael and Liu, 2013; Zignoli et al., 2019).

In addition to aiding in task execution, physical user state manipulation can also be used to assist in planning, such as when a user’s sensing capabilities are diminished. For example, a visually impaired user may wish to solve a Tangram puzzle but must pick up and feel each piece individually. To provide assistance to the user, a robot could sense the puzzle pieces and determine which pieces are viable for the next step of assembly. The robot can then physically guide the user’s hand to this piece allowing the user to solve the puzzle (Bonani et al., 2018). This is an example of human body state manipulation. Instead of manipulating the environment to solve the task, the robot instead changes the user’s physical state configuration in order to better position them to solve the task.

A robot can also act on a user’s body through the resistance of its own joints. A user kinesthetically manipulating a robot arm, for example, may not know the exact path the arm should travel in order to complete a co-manipulation task. The robot can change its admittance or transparency such that it becomes easier (Jarrasse et al., 2008; Li et al., 2015; Lee and Hogan, 2016; Mariotti et al., 2019; Muthusamy et al., 2019; Luo et al., 2020) or more difficult (Bo et al., 2016; Kyrkjebo et al., 2018; Cacace et al., 2019a,b; Wu et al., 2020) to move as the robot’s end effector deviates from a known, low-cost path. This idea can also be applied to full-scale robots, allowing a user to navigate a robot from one point to another by guiding it as if it were another human (Chen and Kemp, 2010) or to use the stiffness of the robot’s arm as a support while standing up (Itadera et al., 2019). Admittance control as a body state manipulation allows the user to have a high degree of control when operating the robot, while the robot provides information about which parts of the environment are better to traverse by altering the stiffness of its joints. This strategy can also be used in therapeutic settings, where a patient recovering from a stroke can be given an automatic, smooth schedule of rehabilitation exercises as the robot changes its admittance depending on the force feedback it receives from the user (Ivanova et al., 2017).

5.3 Human Brain

The final location of assistance we identify is the user’s mental state. These robots assist by transforming the user’s understanding of the world in a helpful way. One common method is for the robot to communicate unknown environmental information to the user. For example, a robot can play particular sounds as it completes its tasks so that a user can track it more easily (Cha et al., 2018). A robot can also describe the local environment for a visually impaired user in a navigation task, enabling them to create a semantic map of the environment (Chen et al., 2016). Similarly, a robot can provide a visual signal to designate objects it intends to interact with so the user can avoid them (Machino et al., 2006; Andersen et al., 2016b; Shu et al., 2018), areas where the robot expects to move so the user can stay away (Hietanen et al., 2019), or areas or paths that the robot thinks the user should take to complete a task in an optimal fashion (Newman et al., 2020). In an emergency scenario, a robot can visually indicate the direction of a safe exit (Robinette et al., 2016). Finally, a robot can provide haptic feedback to indicate when to turn in a navigation task (Moon et al., 2018; Li and Hollis, 2019). Robots that provide alerts like these assist by communicating information about the task or the environment directly to the user so that the user can effectively perform the task.

Robots can also assist in the mental state domain by adopting social roles. Generally, these robots are designed to perform socially beneficial functions similar to those that a human would provide, such as a robot that takes the role of a customer service agent (Vishwanath et al., 2019) or a bingo game leader (Louie et al., 2014). In educational settings such as one-on-one tutoring (Kennedy et al., 2016; Fuglerud and Solheim, 2018; Kanero et al., 2018; van Minkelen et al., 2020) and classroom teaching (Kennedy et al., 2016; Ramachandran et al., 2016; Westlund et al., 2016; Polishuk and Verner, 2018; Ono et al., 2019; Rosenberg-Kima et al., 2019), a robot can deliver lectures in a similar manner to a human teacher. In therapeutic and medical settings, a robot can administer routine medical surveys (Varrasi et al., 2019) independent of the doctor’s social biases (Briggs et al., 2015), provide therapy sessions for routine cognitive behavioral therapy (Dino et al., 2019) or physical therapy (Meyer and Fricke, 2017), and perform other general therapeutic tasks (Agrigoroaie et al., 2016; Fan et al., 2016; Salichs et al., 2018; Alimardani et al., 2020). Finally, a robot’s assistance can vary based on its social role, such as a concierge robot performing different social behaviors when responding to children or adults (Mussakhojayeva et al., 2017), an advice-giving robot providing explanations when a user’s behaviors become non-optimal (Gao et al., 2020) or a robot that gives cooking advice varying its strategies so that the advice is more readily received (Torrey et al., 2013).

Instead of performing a procedure itself, a robot can assist a professional by affecting a user’s mental state. When a therapist is unable to be physically present with a child, for example, a parrot robot can be employed in the home to entice a child with autism to practice skills learned during a therapy session (Shayan et al., 2016; Bharatharaj et al., 2017). During therapy with agitated patients, introducing a pet-like PARO robot can induce mental states more conducive to effective therapy (Shibata et al., 2001; Sabanovic et al., 2013; Chang and Sabanovic, 2015; Shamsuddin et al., 2017). A child-like robot can allow a young patient to practice social skills with a partner more akin to a peer than the therapist is (Goodrich et al., 2011; Kim et al., 2014; Taheri et al., 2014; Ackovska et al., 2017; Nie et al., 2018). Similarly, a child-like robot can assist a teacher by reinforcing a student’s desire to engage with educational material on their own, something students may be more likely to do with a peer than a teacher (Wood et al., 2017; Davison et al., 2020), or increase a user’s ability to recall a story by acting out portions of it (Leite et al., 2015).

Since robot actions are sometimes interpreted socially and as being intentional, robots can select their actions to influence the user’s mental state. For example, predictable and legible motion strategies that indirectly communicate a robot’s goals are readily interpreted by people (Dragan et al., 2013). These same strategies can be used in collaborative tasks to indirectly show the robot’s goal to the user (Bodden et al., 2016; Faria et al., 2017; Zhu et al., 2017; Tabrez et al., 2019). Robots can also mimic human nonverbal behaviors like deictic eye gaze and pointing gestures to indicate task-relevant objects during collaborative tasks (Breazeal et al., 2004; Fischer et al., 2015) or to assist in completing mentally taxing tasks (Admoni et al., 2016; Hemminghaus and Kopp, 2017).

Similarly, robots can use their behavior to suggest their internal emotional state. This strategy can increase rapport, fluidity, and reception of a robot’s assistance through emotive motions (Mok, 2016; Terzioglu et al., 2020) or through facial expressions that give the user feedback regarding a task’s success (Reyes et al., 2016; Rahman, 2019a; Christodoulou et al., 2020). Using socially meaningful actions enables assistive robots to communicate with the user efficiently and fluidly.

Robots can also use social behaviors to induce specific, beneficial emotional responses from a user. By mimicking human nonverbal behaviors, robots can use their eye gaze to induce social pressure on a user to work more efficiently (Riether et al., 2012) or to soften its own dominance to allow for better teamwork (Peters et al., 2019). Assistive robotic gestures can also increase feelings of openness in people who are discussing negative experiences (Hoffman et al., 2014) and motivation in users during medical testing (Uluer et al., 2020), in users during physical exercise (Malik et al., 2014; Schneider and Kummert, 2016; Malik et al., 2017), and in stroke patients performing rehabilitative exercises (Tapus et al., 2007). Since people generally view robotic gestures as intentional, robots can use these gestures to induce mental states that assist the user in performing a task.

In addition to nonverbal communication strategies, robots that are capable of speech can converse with users to induce beneficial mental states (Knepper et al., 2017). Robots can use speech to change the content of the conversation (Gamborino and Fu, 2018) or to answer a question about the surrounding environment (Bui and Chong, 2018). Robots can use dialogue to gather information during collaborative teleoperation (Fong et al., 2003), to engender trust in an escape room (Gao et al., 2019), or to facilitate collaboration between two targets of assistance (Strohkorb et al., 2016). Robots can also talk about themselves to influence a user’s view of themselves. For example, tutoring robots for children can make vulnerable statements about themselves to increase trust with the student and student engagement (Martelaro et al., 2016). Similarly, a robot in a group setting can facilitate group trust by leading with vulnerable statements about itself, so that its teammates feel more comfortable sharing their own vulnerabilities. This effect can cascade as more group members explain their own failures, console each other, and laugh together (Sebo et al., 2018). Failing to deliver assistance in contexts where the robot is expected to provide assistance can have deleterious effects on a user’s mental state, causing users to mistrust the robot and harm their relationship and rapport (Kontogiorgos et al., 2020; Rossi et al., 2020).

Beyond focusing on specific content of speech, conversational robots can further affect the user’s mental state in the way they speak. Robots can perform back-channelling to give the appearance of active listening (Birnbaum et al., 2016; Sebo et al., 2020), or give informative feedback to improve task performance (Guneysu and Arnrich, 2017; Law et al., 2017; Sharifara et al., 2018), a user’s self-efficacy (Zafari et al., 2019), or their motivation (Mucchiani et al., 2017; Shao et al., 2019). Robots can choose to only interrupt a distracted user at appropriate times (Sirithunge et al., 2018; Unhelkar et al., 2020). A robot can also change its tone to project an emotion such as happiness to improve the user’s mood and task performance (Mataric et al., 2009; Lubold et al., 2016; Winkle and Bremner, 2017; Rhim et al., 2019). Finally, a robot can combine these qualities with the content of the conversation to change the user’s perception of the robot’s social role (Bartl et al., 2016; Bernardo et al., 2016; Monaikul et al., 2020). Specifically, a robot can act as a student during a tutoring session to induce different learning techniques in a human student (Sandygulova et al., 2020).

Shared control, especially when an input controller (e.g., a joystick) limits the number of input degrees of freedom (Aronson et al., 2018), can also be made easier for users by providing assistance that alters the user’s mental state. A robot arm can assist its user by maintaining more easily controllable state configurations (Javdani et al., 2015; Till et al., 2015; Vu et al., 2017; Aronson et al., 2018; Newman et al., 2018) or by optimizing which degrees of freedom the user can control at any given time (Herlant et al., 2016). This idea can be extended to supernumerary arms that provide users with an additional appendage but are difficult to control (Nakabayashi et al., 2018; Vatsal and Hoffman, 2018), teleoperating robotic arms through electromyography (Noda et al., 2013; Pham et al., 2017) or similar sensing devices (Muratore et al., 2019), or humanoid robots (Lin et al., 2019; Zhou et al., 2019). Additionally, a robot might be able to enter environments that are unavailable to a user, allowing the user to teleoperate the robot in these environments and effectively extend their reachable environment (Horiguchi et al., 2000). These strategies all effectively alter the user’s mental state by decreasing the burden of user communication.

Finally, a robot can assist a user by transforming its own physical configuration into one more amenable to task completion. This approach is useful in collaborative scenarios where the robot and user may collide. To avoid this problem, robots can decrease their operating velocity when working in close proximity to users (Araiza-Illan and Clemente, 2018; Rosenstrauch et al., 2018; Svarny et al., 2019) or take paths or actions specifically designed to reduce the likelihood of a collision (De Luca and Flacco, 2012; Hayne et al., 2016; Liu et al., 2018; Nguyen et al., 2018). Similar to shared control, these strategies decrease the user’s planning burden during the task. By taking responsibility for collisions, a robot effectively alters its own actions so that the user can be less concerned with monitoring and modeling the robot’s behavior and can concentrate on completing their portion of the task.

5.4 Implications

Choosing which action space the robot should act in is a crucial decision for robot designers. To aid users in room cleaning, for example, researchers have developed robots that alter the environment by directly picking up misplaced objects, while others have developed augmented reality solutions that provide assistance in the user’s mental space by showing them routes that, if followed, would lead to the shortest time spent cleaning. Realizing that a given task can be solved by acting on any part of the mutual state allows researchers to develop novel solutions to problems that have historically been restricted to robots acting in a single space.

This realization, however, means that the robot’s action space is not simply determined by the task the robot is being built to solve. Instead, a roboticist must carefully consider the capabilities of the users for whom they are designing the robot. The choice of how the robot acts must be tuned to the needs of the user, and it has broader implications for the user’s sense of agency and trust in the system. This separation of robot action spaces enables designers to compare robots from different domains that have similar action spaces and develop better assistive solutions.

6 Time

The third key design axis we present concerns how assistive robots coordinate the timing of their actions with the targets of their assistance. Consider an assisted eating scenario: a robot might only offer food when given an explicit trigger by the user, or it might monitor the user’s behavior to decide when to initiate the action itself. We categorize the timing of assistive actions as reactive, proactive, or simultaneous. Reactive robots act only when given explicit commands. Proactive robots use predictive models or other understandings of the world to initiate their actions without an explicit command. Simultaneous action occurs in collaborative settings, in which the robot continuously monitors the user for both explicit and implicit information to direct its actions. Choosing how to time the robot’s behavior can change the difficulty of the task and how users react to the robot’s assistance (Figure 3).
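
A schematic sketch of this categorization (ours; the cue sources are placeholders for whatever signals a particular system uses):

from enum import Enum, auto

class Timing(Enum):
    REACTIVE = auto()      # act only on explicit user commands
    PROACTIVE = auto()     # self-initiate actions from a predictive model
    SIMULTANEOUS = auto()  # act continuously alongside the user

def next_action(timing, explicit_command, predicted_need, fused_signals):
    # Dispatch on the timing strategy; each argument stands in for the
    # cue source named in the text (all placeholders).
    if timing is Timing.REACTIVE:
        return explicit_command   # None means: wait for a command
    if timing is Timing.PROACTIVE:
        return predicted_need     # e.g., from a model of user behavior
    return fused_signals          # explicit + implicit, monitored continuously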

FIGURE 3. A key axis in assistive robotic systems concerns what type of cue leads to the robot taking actions. Robots can be reactive and respond to explicit input only, be proactive and interpret the general task state to choose to act on their own, or collaborate closely with the user by acting simultaneously with them.

6.1 Reactive

Reactive assistance occurs when the assistive action is triggered by an explicit command. Consider a teleoperated robotic arm developed for assistive eating (Javdani et al., 2015; Aronson et al., 2018; Newman et al., 2018). In these studies, a user operates a two-degree-of-freedom joystick to control a seven-degree-of-freedom robot arm and pick up a morsel of food from a plate. Under direct control, the robot’s end-effector moves only while the user is engaging the joystick. The user might also give commands at a higher level of abstraction, perhaps by pressing one button to request food and another for water.

Reactive robots can also respond to more task-specific, contextual triggers. In Canal et al. (2018), an assistive robot helps a user put on their shoes. This interaction is modeled as a complicated handover problem, in which the user must position their foot properly and apply enough resistance that the shoe remains on the foot. In this work, the robot responds to a gesture the user performs with their foot: when the user moves their foot in the specified way, the robot knows that it is an acceptable time to put the shoe on.
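
A contextual trigger of this kind can be as simple as a loop that idles until a recognizer reports the cue. The sketch below is our illustration, not the implementation in Canal et al. (2018); the recognizer and manipulation callbacks are assumed stand-ins.

```python
def assist_when_cued(recognize_gesture, place_shoe, observations):
    """Idle until the foot gesture is recognized, then act exactly once."""
    for obs in observations:
        if recognize_gesture(obs):
            place_shoe()  # the explicit, user-initiated cue gates the action
            return "DONE"
    return "WAITING"

events = [{"gesture": False}, {"gesture": True}]
print(assist_when_cued(lambda o: o["gesture"],
                       lambda: print("placing shoe"),
                       events))
```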

In general, reactive systems give the user more control over the robot and therefore more agency in the overall interaction. Additionally, the robot does not generally need sophisticated models of the task, since it can rely on explicit input from the user. This simplicity means that the robot tends to be less sensitive to the particular task or domain, as it relies on the user to adapt the task to the robot’s capabilities. However, this additional control requires the user to spend more time and effort operating the robot, which can distract from other tasks. Controlling a robot at this level may also require significant training, as the robot’s capabilities may not clearly match the requirements of the task. The control burden grows when the user must explicitly command the robot to begin every interaction (Baraglia et al., 2016), and additional control complexity, such as adding modal control to teleoperation, can be cognitively taxing and slow progress in the task (Herlant et al., 2016). Furthermore, requiring the user to explicitly cue the robot reduces collaborative fluency, which is undesirable because collaborative fluency has been shown to increase the user’s perceived quality of the interaction (Hoffman et al., 2014) and decrease the time spent during interactions (Huang and Mutlu, 2016).

6.2 Proactive

Proactive assistance occurs when the robot predicts that an action would fulfill the user’s goals and takes that action without explicit prompting. For example, in assisted eating, the robot may anticipate a user’s thirst after eating and choose to reach for the glass of water before receiving explicit input. The robot relies on a model of the task and of user behavior to estimate what the user will want next. Proactive assistance generally improves the smoothness of interactions, as the assistance target does not need to spend time or cognitive effort providing explicit instructions to the robot. However, this type of assistance depends on the model used to cue the robot’s actions, so the added complexity may make the system less reliable.

Consider again the task of operating a high-degree-of-freedom robot using a low-degree-of-freedom input device. Instead of using explicit signals from the user, Herlant et al. (2016) designed a robot that can proactively switch modes. In a simulated navigation task, a user drives a robot that can move only vertically or only horizontally at any given time through a two-dimensional maze. The robot uses a model of the environment to determine whether horizontal or vertical motion is optimal given its current position. The robot can then switch the mode proactively, allowing the user to simply direct the robot to move, which shortens the overall interaction and removes the cognitive burden seen in reactive mode switching.
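
A minimal sketch of this idea, assuming a precomputed cost-to-go over the maze (our simplification, not the exact method of Herlant et al. (2016)), picks whichever single-axis mode can reduce the remaining distance to the goal the most:

```python
import math

def best_mode(pos, cost_to_go):
    """Pick 'horizontal' or 'vertical' by one-step lookahead on a grid.

    cost_to_go maps each free cell (x, y) to its remaining path length;
    cells absent from the map are treated as walls.
    """
    x, y = pos
    neighbors = {
        "horizontal": [(x - 1, y), (x + 1, y)],
        "vertical": [(x, y - 1), (x, y + 1)],
    }
    def best(cells):
        return min(cost_to_go.get(c, math.inf) for c in cells)
    return min(neighbors, key=lambda mode: best(neighbors[mode]))

# Toy corridor: the goal lies to the right, so horizontal mode wins.
cost = {(0, 0): 2, (1, 0): 1, (2, 0): 0}
print(best_mode((0, 0), cost))  # -> "horizontal"
```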

Another way a robot can assist proactively is by building a model of the user to infer the task goal before it has been expressed. For example, a robot can predict the next fruit that a customer wants to add to their smoothie (Huang and Mutlu, 2016). Before the user explicitly requests this ingredient, the robot can prepare to grab it, increasing the fluidity of the interaction.
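
One common way to implement this kind of anticipation, shown here as a generic Bayesian sketch rather than the specific model of Huang and Mutlu (2016), is to maintain a belief over candidate goals and update it as implicit signals such as gaze arrive:

```python
def update_belief(belief, likelihood):
    """Bayes update: belief is the prior over goals, and likelihood
    gives P(observation | goal) for each goal."""
    posterior = {g: belief[g] * likelihood[g] for g in belief}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

belief = {"strawberry": 1 / 3, "banana": 1 / 3, "mango": 1 / 3}
# Hypothetical gaze evidence favoring the strawberries:
belief = update_belief(belief, {"strawberry": 0.7, "banana": 0.2, "mango": 0.1})
target = max(belief, key=belief.get)  # pre-position toward this ingredient
print(target, round(belief[target], 2))
```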

One challenge of proactive assistance is that users can be uncomfortable or even endangered if the robot moves unexpectedly. To mitigate this concern, the robot can communicate its intentions to the user explicitly. This could be done by projecting the robot’s plan directly onto the physical environment, for example by highlighting the part of a car door it plans to work on (Andersen et al., 2016b), or by showing its intended travel path in a virtual reality headset (Shu et al., 2018).

Proactive assistance enables more robust and general applications than reactive assistance does. However, the added sophistication requires additional complexity in the robot’s models and behavior, which is compounded by the need to respond to unexpected stimuli in varied environments. In addition, a purely proactive system can be uncomfortable or dangerous if the user is not prepared for the robot’s actions. To mitigate some of these concerns, designers can make some parts of the interaction reactive and others proactive. For example, the serving robot in Huang and Mutlu (2016) proactively moves closer to its estimate of the user’s most likely request, but it does not initiate the actual grasping process until it receives an explicit command.

6.3 Simultaneous

Simultaneous assistance lies between the previous two categories and includes shared control and collaborative robots. These systems generally function similarly to proactive assistance but act at the same time as the user. They include shared autonomy systems (Javdani et al., 2015; Javdani et al., 2018; Losey et al., 2018), which fuse the user’s direct command with an autonomously generated command and arbitrate between the two according to some schema. The category also includes tasks like carrying a table together (Nikolaidis et al., 2016; DelPreto and Rus, 2019), in which both the user and the robot must act independently for progress to be made.
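
One simple arbitration schema is linear blending, sketched below; the cited systems use more sophisticated policies, such as hindsight optimization, but the structure is similar: the autonomous command is weighted more heavily as the robot grows more confident in its prediction of the user's goal.

```python
import numpy as np

def arbitrate(u_user, u_robot, confidence):
    """Linearly blend user and robot velocity commands.

    confidence in [0, 1] might be the posterior probability of the
    robot's most likely goal; higher confidence shifts control toward
    the autonomous command.
    """
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return alpha * np.asarray(u_robot) + (1 - alpha) * np.asarray(u_user)

# The user pushes mostly along x; the robot nudges toward its predicted goal.
print(arbitrate(u_user=[0.9, 0.0], u_robot=[0.4, 0.5], confidence=0.6))
```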

Simultaneous assistance occurs often in collaborative assembly tasks. The goal and structure of a joint assembly task are often pre-specified, making it easy to determine a user’s goal. A robot in such a task can directly assist by, for example, lifting heavy objects and holding them steady so that they can be worked on (Fischer et al., 2015; El Makrini et al., 2017). A robot can also assist by orienting a part to optimize construction, for example by following the images found in an assembly manual (Akkaladevi et al., 2016; Wang et al., 2020).

Simultaneous assistance often benefits from sophisticated communication strategies. For example, DelPreto and Rus (2019) designed a robot that senses electromyographic signals from a user so that the two can jointly manipulate a heavy object. A robot can also communicate back to the user, for example by changing its stiffness during a co-manipulation task to alert the user that they should not move an object into a specific location (Bo et al., 2016). Similarly, a robot can cue the user about the next step of a complicated assembly task, such as by pointing at the next item of interest (Admoni et al., 2016), providing negative emotive feedback when the user completes an incorrect assembly step (Reyes et al., 2016; Rahman, 2019a), or displaying other emotive signals of task progress (Mok, 2016; Terzioglu et al., 2020).

Simultaneous assistive systems generally require tight collaboration between the user and the robot. The closeness of the collaboration requires the system to have a more sophisticated strategy for understanding user commands, since it is unlikely that the user will give precise commands while also working on their own portion of the task. However, these models can be more flexible than those of purely proactive systems: the robot gains immediate feedback from the user about whether or not its action is correct, so it can recover from some model failures more quickly.

6.4 Implications

Determining when a robot should act has implications for the quality of an interaction. Reactive systems rely on explicit control, which affords the user more agency but also increases their burden in completing the task. Proactive systems require more sophisticated models and sensing onboard the robot, but they can improve collaborative fluency while decreasing user burden. Systems that act in anticipation of explicit user commands may even influence future user behavior in unforeseen ways, raising questions about who is in control of setting the task goal (Newman et al., 2020). Proactive robots also generally exhibit more agency of their own, which introduces complex challenges such as safety and trust.

Preferences for when a robot should act may differ among users even within the same task domain. While one user may prefer a robot that requires less training and effort to operate, another might prefer more direct control over the robot to determine its behavior more precisely. If a user is paired with the system they least prefer, the interaction may cease to be assistive. In addition, an assistive system need not be entirely proactive, reactive, or simultaneous: it can choose different timing and cueing strategies for different parts of the task. Choosing exactly when a robot executes its actions requires careful thought about the nature of the task, the capability of the robot, and the desires of the user.

7 Conclusion

In this paper, we describe an overall perspective on robotic systems that emphasizes their assistive intentions. With this perspective, we present three key design axes along which assistive robotics research can be compared across domains: the relationships robots develop with people, their action space, and their action timing. We explore these axes through a review of recent assistive robotics research, showing how assistive robots from across domains face similar challenges and make comparable design decisions.

Much of the research discussed in this paper is specific to its task domain due to how the field has been organized and the difficulty of building abstractions. In this work, we propose some abstractions, and we hope that they will enable designers of assistive robots to find systems in other domains that share their problems and to draw deeper connections with them.

For each axis, we discuss the design tradeoffs that result from particular approaches. Across these axes, several themes emerge. Choices in the robot’s action space and timing can both affect a user’s sense of agency. Similarly, both the robot’s action space and its relationship with the user shape the structure of communication between the robot and the user, which alters the quality of the assistance. It is our hope that researchers will explore more themes that span these design axes and provide more structure to the development of assistive robots.

Finally, this work is intended to start a conversation about how to understand the specific challenges of assistive robotics within the general area of human-robot interaction. With this framework, we hope to encourage researchers to further explore the nature of assistance as a general concept and describe its inherent challenges. We do not claim that these axes are complete; rather, we present them as the beginning of a larger effort to develop general principles of assistive robotics.

Author Contributions

BAN, RMA, HA, and KK contributed to the conception and refinement of the main ideas of the paper. BAN developed the method for gathering papers for review. BAN read, selected, and organized the papers into the three critical axes. BAN wrote the first draft of the paper. BAN and RMA wrote the second draft of the paper, significantly reorganizing the first draft. RMA and BAN contributed to creating figures. All authors contributed to manuscript revision, read, and approved the submitted version.

Funding

This work was partially supported by the National Science Foundation grant NSF IIS-1943072 and the Tang Family Foundation Innovation Fund. This research was also supported in part by a gift from Uptake, via the Uptake Machine Learning Fund for Social Good at Carnegie Mellon.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We thank Alex London for discussions about the definitions and ethical implications of robotic assistance. We also thank Jessica Benner, Sarah Young, and Melanie Gainey for helping with the process of paper collection.

References

Acharya, U., Kunde, S., Hall, L., Duncan, B. A., and Bradley, J. M. (2018). “Inference of User Qualities in Shared Control,” in 2018 IEEE International Conference on Robotics and Automation (ICRA) (Brisbane, QLD, Australia: IEEE Computer Soc), 588–595. doi:10.1109/icra.2018.8461193

Ackovska, N., Kirandziska, V., Tanevska, A., Bozinovska, L., and Bozinovski, A. (2017). “Robot - Assisted Therapy for Autistic Children,” in SoutheastCon 2017 (Concord, NC, USA: IEEE). doi:10.1109/secon.2017.7925401

Admoni, H., Weng, T., Hayes, B., and Scassellati, B. (2016). “Robot Nonverbal Behavior Improves Task Performance in Difficult Collaborations,” in 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (Christchurch, New Zealand: IEEE Press), 51–58. doi:10.1109/hri.2016.7451733

Agrigoroaie, R., Ferland, F., and Tapus, A. (2016). “The Enrichme Project: Lessons Learnt from a First Interaction with the Elderly,” in Social Robotics, (ICSR 2016). Editors A. Agah, J. Cabibihan, A. Howard, M. Salichs, and H. He (Berlin: Springer-Verlag), Vol. 9979, 735–745. doi:10.1007/978-3-319-47437-3_72

Akkaladevi, S. C., Plasch, M., Pichler, A., and Rinner, B. (2016). “Human Robot Collaboration to Reach a Common Goal in an Assembly Process,” in Proceedings of the Eighth European Starting AI Researcher Symposium (STAIRS 2016). Vol. 284 of Frontiers in Artificial Intelligence and Applications. Editors D. Pearce, and H. Pinto (Amsterdam: IOS Press), 3–14.

Alimardani, M., Kemmeren, L., Okumura, K., and Hiraki, K. (2020). “Robot-assisted Mindfulness Practice: Analysis of Neurophysiological Responses and Affective State Change,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 683–689. doi:10.1109/ro-man47096.2020.9223428

Andersen, R. S., Bogh, S., Moeslund, T. B., and Madsen, O. (2016a). “Task Space Hri for Cooperative Mobile Robots in Fit-Out Operations Inside Ship Superstructures,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 880–887. doi:10.1109/roman.2016.7745223

Andersen, R. S., Madsen, O., Moeslund, T. B., and Amor, H. B. (2016b). “Projecting Robot Intentions into Human Environments,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 294–301. doi:10.1109/roman.2016.7745145

Araiza-Illan, D., and Clemente, A. d. S. B. (2018). “Dynamic Regions to Enhance Safety in Human-Robot Interactions,” in 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA) (IEEE), 693–698.

Aronson, R. M., Santini, T., Kuebler, T. C., Kasneci, E., Srinivasa, S., and Admoni, H. (2018). “Eye-hand Behavior in Human-Robot Shared Manipulation,” in HRI ‘18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Assoc Computing Machinery) (IEEE), 4–13. doi:10.1145/3171221.3171287

Baklouti, M., Monacelli, E., Guitteny, V., and Couvet, S. (2008). “Intelligent Assistive Exoskeleton with Vision Based Interface,” in Smart Homes and Health Telematics. Vol. 5120 of Lecture Notes in Computer Science. Editors S. Helal, S. Mitra, J. Wong, C. Chang, and M. Mokhtari (Berlin: Springer-Verlag), 123–135.

Baraglia, J., Cakmak, M., Nagai, Y., Rao, R., and Asada, M. (2016). “Initiative in Robot Assistance During Collaborative Task Execution,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (Christchurch, New Zealand: IEEE Press), 67–74. doi:10.1109/hri.2016.7451735

Bartl, A., Bosch, S., Brandt, M., Dittrich, M., and Lugrin, B. (2016). “The Influence of a Social Robot’s Persona on How it Is Perceived and Accepted by Elderly Users,” in Social Robotics, (ICSR 2016). Vol. 9979 of Lecture Notes in Artificial Intelligence. Editors A. Agah, J. Cabibihan, A. Howard, M. Salichs, and H. He (Berlin: Springer-Verlag), 681–691. doi:10.1007/978-3-319-47437-3_67

Bernardo, B., Alves-Oliveira, P., Santos, M. G., Melo, F. S., and Paiva, A. (2016). “An Interactive Tangram Game for Children with Autism,” in Intelligent Virtual Agents, IVA 2016. Vol. 10011 of Lecture Notes in Artificial Intelligence. Editors D. Traum, W. Swartout, P. Khooshabeh, S. Kopp, S. Scherer, and A. Leuski (Berlin: Springer-Verlag), 500–504. doi:10.1007/978-3-319-47665-0_63

Bharatharaj, J., Huang, L., Al-Jumaily, A., Elara, M. R., and Krageloh, C. (2017). Investigating the Effects of Robot-Assisted Therapy Among Children with Autism Spectrum Disorder Using Bio-Markers. IOP Conf. Series-Materials Sci. Eng. 234, 012017. doi:10.1088/1757-899x/234/1/012017

Birnbaum, G. E., Mizrahi, M., Hoffman, G., Reis, H. T., Finkel, E. J., and Sass, O. (2016). “Machines as a Source of Consolation: Robot Responsiveness Increases Human Approach Behavior and Desire for Companionship,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (Christchurch, New Zealand: IEEE Press), 165–171. doi:10.1109/hri.2016.7451748

Bo, H., Mohan, D. M., Azhar, M., Sreekanth, K., and Campolo, D. (2016). “Human-robot Collaboration for Tooling Path Guidance,” in 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BIOROB) (Singapore: IEEE), 1340–1345. doi:10.1109/biorob.2016.7523818

Bodden, C., Rakita, D., Mutlu, B., and Gleicher, M. (2016). “Evaluating Intent-Expressive Robot Arm Motion,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 658–663. doi:10.1109/roman.2016.7745188

Bonani, M., Oliveira, R., Correia, F., Rodrigues, A., Guerreiro, T., and Paiva, A. (2018). “What My Eyes Can’t See, a Robot Can Show Me: Exploring the Collaboration between Blind People and Robots,” in ASSETS ’18: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (New York, NY: Association for Computing Machinery), 15–27.

Breazeal, C., Brooks, A., Chilongo, D., Gray, J., Hoffman, G., Kidd, C., et al. (2004). “Working Collaboratively with Humanoid Robots,” in 2004 4th IEEE/RAS International Conference on Humanoid Robots, Vols 1 and 2, Proceedings (IEEE), 253–272.

Briggs, P., Scheutz, M., and Tickle-Degnen, L. (2015). “Are Robots Ready for Administering Health Status Surveys? First Results from an Hri Study with Subjects with Parkinson’s Disease,” in Proceedings of the 2015 ACM/IEEE International Conference on Human-robot Interaction (HRI’15) (Assoc Computing Machinery) (IEEE), 327–334.

Broehl, C., Nelles, J., Brandl, C., Mertens, A., and Schlick, C. M. (2016). “Tam Reloaded: A Technology Acceptance Model for Human-Robot Cooperation in Production Systems,” in HCI International 2016 - Posters’ Extended Abstracts, PT I. Vol. 617 of Communications in Computer and Information Science. Editor C. Stephanidis (New York City: Springer International Publishing AG), 97–103.

Brose, S. W., Weber, D. J., Salatin, B. A., Grindle, G. G., Wang, H., Vazquez, J. J., et al. (2010). The Role of Assistive Robotics in the Lives of Persons with Disability. Am. J. Phys. Med. Rehabil. 89, 509–521. doi:10.1097/phm.0b013e3181cf569b

Bui, H.-D., and Chong, N. Y. (2018). “An Integrated Approach to Human-Robot-Smart Environment Interaction Interface for Ambient Assisted Living,” in 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO) (IEEE), 32–37. doi:10.1109/arso.2018.8625821

Cacace, J., Caccavale, R., Finzi, A., and Lippiello, V. (2019a). “Variable Admittance Control Based on Virtual Fixtures for Human-Robot Co-manipulation,” in 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (IEEE), 1569–1574.

Cacace, J., Finzi, A., and Lippiello, V. (2019b). “Enhancing Shared Control via Contact Force Classification in Human-Robot Cooperative Task Execution,” in Human Friendly Robotics. Vol. 7 of Springer Proceedings in Advanced Robotics. Editors F. Ficuciello, F. Ruggiero, and A. Finzi (New York City: Springer International Publishing AG), 167–179. doi:10.1007/978-3-319-89327-3_13

Cakmak, M., Srinivasa, S. S., Lee, M. K., Forlizzi, J., and Kiesler, S. (2011). “Human Preferences for Robot-Human Hand-Over Configurations,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE). doi:10.1109/iros.2011.6094735

Canal, G., Alenya, G., and Torras, C. (2016). “Personalization Framework for Adaptive Robotic Feeding Assistance,” in Social Robotics, (ICSR 2016). Vol. 9979 of Lecture Notes in Artificial Intelligence. Editors A. Agah, J. Cabibihan, A. Howard, M. Salichs, and H. He (Berlin: Springer-Verlag), 22–31. doi:10.1007/978-3-319-47437-3_3

Canal, G., Pignat, E., Alenya, G., Calinon, S., and Torras, C. (2018). “Joining High-Level Symbolic Planning with Low-Level Motion Primitives in Adaptive Hri: Application to Dressing Assistance,” in 2018 IEEE International Conference on Robotics and Automation (ICRA) (Brisbane, QLD, Australia: IEEE), 3273–3278. doi:10.1109/icra.2018.8460606

Carmichael, M. G., and Liu, D. (2013). “Admittance Control Scheme for Implementing Model-Based Assistance-As-Needed on a Robot,” in 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (IEEE), 870–873. doi:10.1109/embc.2013.6609639

Cha, E., Fitter, N. T., Kim, Y., Fong, T., and Mataric, M. J. (2018). “Effects of Robot Sound on Auditory Localization in Human-Robot Collaboration,” in HRI ‘18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Assoc Computing Machinery) (IEEE), 434–442. doi:10.1145/3171221.3171285

Chang, W.-L., and Sabanovic, S. (2015). “Interaction Expands Function: Social Shaping of the Therapeutic Robot Paro in a Nursing Home,” in Proceedings of the 2015 ACM/IEEE International Conference on Human-Robot Interaction (HRI’15) (Assoc Computing Machinery) (IEEE), 343–350.

Chen, K., Plaza-Leiva, V., Min, B.-C., Steinfeld, A., and Dias, M. B. (2016). “Navcue: Context Immersive Navigation Assistance for Blind Travelers,” in Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI’16) (Assoc Computing Machinery) (IEEE), 559. doi:10.1109/hri.2016.7451855

Chen, T. L., and Kemp, C. C. (2010). “Lead Me by the Hand: Evaluation of a Direct Physical Interface for Nursing Assistant Robots,” in Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2010) (IEEE), 367–374. doi:10.1109/hri.2010.5453162

Choi, H., Lee, J., and Kong, K. (2018). “A Human-Robot Interface System for Walkon Suit: A Powered Exoskeleton for Complete Paraplegics,” in IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society (IEEE), 5057–5061. doi:10.1109/iecon.2018.8592884

Christodoulou, P., May Reid, A. A., Pnevmatikos, D., Rioja del Rio, C., and Fachantidis, N. (2020). “Students Participate and Evaluate the Design and Development of a Social Robot,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 739–744. doi:10.1109/ro-man47096.2020.9223490

Cremer, S., Doelling, K., Lundberg, C. L., McNair, M., Shin, J., and Popa, D. (2016). “Application Requirements for Robotic Nursing Assistants in Hospital Environments,” in Sensors For Next-generation Robotics III. Vol. 9859 of Proceedings of SPIE. Editors D. Popa, and M. Wijesundara (Bellingham, Wa: Spie-Int Soc Optical Engineering). doi:10.1117/12.2229241

Cserteg, T., Erdos, G., and Horvath, G. (2018). “Assisted Assembly Process by Gesture Controlled Robots,” in 51st CIRP Conference on Manufacturing Systems. Vol. 72 of Procedia CIRP. Editor L. Wang (Amsterdam: Elsevier Science Bv), 51–56. doi:10.1016/j.procir.2018.03.028

Davison, D. P., Wijnen, F. M., Charisi, V., van der Meij, J., Evers, V., and Reidsma, D. (2020). “Working with a Social Robot in School: A Long-Term Real-World Unsupervised Deployment,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ‘20) (Assoc Computing Machinery) (IEEE), 63–72.

De Luca, A., and Flacco, F. (2012). “Integrated Control for Phri: Collision Avoidance, Detection, Reaction and Collaboration,” in 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BIOROB). Editors J. Desai, L. Jay, and L. Zollo (Rome, Italy: IEEE), 288–295. doi:10.1109/biorob.2012.6290917

DelPreto, J., and Rus, D. (2019). “Sharing the Load: Human-Robot Team Lifting Using Muscle Activity,” in 2019 International Conference on Robotics and Automation (ICRA). Editors A. Howard, K. Althoefer, F. Arai, F. Arrichiello, B. Caputo, J. Castellanos et al. (IEEE), 7906–7912. doi:10.1109/icra.2019.8794414

Devin, S., and Alami, R. (2016). “An Implemented Theory of Mind to Improve Human-Robot Shared Plans Execution,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 319–326. doi:10.1109/hri.2016.7451768

Dino, F., Zandie, R., Ahdollahi, H., Schoeder, S., and Mahoor, M. H. (2019). “Delivering Cognitive Behavioral Therapy Using a Conversational Social Robot,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE), 2089–2095. doi:10.1109/iros40897.2019.8968576

Dometios, A. C., Papageorgiou, X. S., Arvanitakis, A., Tzafestas, C. S., and Maragos, P. (2017). “Real-Time End-Effector Motion Behavior Planning Approach Using On-Line Point-Cloud Data Towards a User Adaptive Assistive Bath Robot,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Editors A. Bicchi, and A. Okamura (IEEE), 5031–5036. doi:10.1109/iros.2017.8206387

Doroodgar, B., Ficocelli, M., Mobedi, B., and Nejat, G. (2010). “The Search for Survivors: Cooperative Human-Robot Interaction in Search and Rescue Environments Using Semi-Autonomous Robots,” in 2010 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 2858–2863. doi:10.1109/robot.2010.5509530

Dragan, A. D., Lee, K. C. T., and Srinivasa, S. S. (2013). “Legibility and Predictability of Robot Motion,” in 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 301–308. doi:10.1109/HRI.2013.6483603

El Makrini, I., Merckaert, K., Lefeber, D., and Vanderborght, B. (2017). “Design of a Collaborative Architecture for Human-Robot Assembly Tasks,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Editors A. Bicchi, and A. Okamura (IEEE), 1624–1629. doi:10.1109/iros.2017.8205971

Fan, J., Beuscher, L., Newhouse, P. A., Mion, L. C., and Sarkar, N. (2016). “A Robotic Coach Architecture for Multi-User Human-Robot Interaction (Ramu) with the Elderly and Cognitively Impaired,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 445–450. doi:10.1109/roman.2016.7745157

Faria, M., Silva, R., Alves-Oliveira, P., Melo, F. S., and Paiva, A. (2017). ““me and You Together” Movement Impact in Multi-User Collaboration Tasks,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Editors A. Bicchi, and A. Okamura (IEEE), 2793–2798. doi:10.1109/iros.2017.8206109

Farrell, L., Strawser, P., Hambuchen, K., Baker, W., and Badger, J. (2017). “Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Editors A. Bicchi, and A. Okamura (IEEE), 3797–3802. doi:10.1109/iros.2017.8206229

Ferreira, B. Q., Karipidou, K., Rosa, F., Petisca, S., Alves-Oliveira, P., and Paiva, A. (2016). “A Study on Trust in a Robotic Suitcase,” Social Robotics, (ICSR 2016) Vol. 9979 of Lecture Notes in Artificial Intelligence. Editors A. Agah, J. Cabibihan, A. Howard, M. Salichs, and H. He (Berlin: Springer-Verlag), 179–189. doi:10.1007/978-3-319-47437-3_18

Fischer, K., Jensen, L. C., Kirstein, F., Stabinger, S., Erkent, O., Shukla, D., et al. (2015). “The Effects of Social Gaze in Human-Robot Collaborative Assembly,” Social Robotics (ICSR 2015) Vol. 9388 of Lecture Notes in Artificial Intelligence Editors A. Tapus, E. Andre, J. Martin, F. Ferland, and M. Ammi (Berlin: Springer-Verlag), 204–213. doi:10.1007/978-3-319-25554-5_21

Fong, T., Thorpe, C., and Baur, C. (2003). “Collaboration, Dialogue, and Human-Robot Interaction,” Robotics Research Vol. 6 of Springer Tracts in Advanced Robotics Editors R. Jarvis, and A. Zelinsky (Berlin: Springer-Verlag), 255–266.

Fuglerud, K. S., and Solheim, I. (2018). “The Use of Social Robots for Supporting Language Training of Children,” in Transforming Our World Through Design, Diversity And Education. Vol. 256 of Studies in Health Technology and Informatics. Editors G. Craddock, C. Doran, L. McNutt, and D. Rice (Amsterdam, Netherlands: IOS PRESS), 401–408.

Gamborino, E., and Fu, L.-C. (2018). “Interactive Reinforcement Learning Based Assistive Robot for the Emotional Support of Children,” in 2018 18th International Conference on Control, Automation and Systems (ICCAS) (IEEE), 708–713.

Gao, X., Gong, R., Zhao, Y., Wang, S., Shu, T., and Zhu, S.-C. (2020). “Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 1119–1126. doi:10.1109/ro-man47096.2020.9223595

Gao, Y., Sibirtseva, E., Castellano, G., and Kragic, D. (2019). “Fast Adaptation with Meta-Reinforcement Learning for Trust Modelling in Human-Robot Interaction,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE), 305–312. doi:10.1109/iros40897.2019.8967924

Goeruer, O. C., Rosman, B., Sivrikaya, F., and Albayrak, S. (2018). “Social Cobots: Anticipatory Decision-Making for Collaborative Robots Incorporating Unexpected Human Behaviors,” in HRI ‘18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Assoc Computing Machinery) (IEEE), 398–406.

Goldau, F. F., Shastha, T. K., Kyrarini, M., and Graeser, A. (2019). “Autonomous Multi-Sensory Robotic Assistant for a Drinking Task,” in 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR) (IEEE), 210–216. doi:10.1109/icorr.2019.8779521

Goodrich, M. A., Colton, M., Brinton, B., and Fujiki, M. (2011). “A Case for Low-Dose Robotics in Autism Therapy,” in Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2011) (IEEE), 143–144. doi:10.1145/1957656.1957702

Grames, E. M., Stillman, A. N., Tingley, M. W., and Elphick, C. S. (2019). An Automated Approach to Identifying Search Terms for Systematic Reviews Using Keyword Co‐Occurrence Networks. Methods Ecol. Evol. 10, 1645–1654. doi:10.1111/2041-210X.13268

Grigore, E. C., Eder, K., Pipe, A. G., Melhuish, C., and Leonards, U. (2013). “Joint Action Understanding Improves Robot-To-Human Object Handover,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Editor N. Amato (IEEE), 4622–4629. doi:10.1109/iros.2013.6697021

Guneysu, A., and Arnrich, B. (2017). “Socially Assistive Child-Robot Interaction in Physical Exercise Coaching,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Editors A. Howard, K. Suzuki, and L. Zollo (IEEE), 670–675. doi:10.1109/roman.2017.8172375

Hawkins, K. P., Bansal, S., Vo, N. N., and Bobick, A. F. (2014). “Anticipating Human Actions for Collaboration in the Presence of Task and Sensor Uncertainty,” in 2014 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 2215–2222. doi:10.1109/icra.2014.6907165

Hawkins, K. P., Vo, N., Bansal, S., and Bobick, A. F. (2013). “Probabilistic Human Action Prediction and Wait-Sensitive Planning for Responsive Human-Robot Collaboration,” in 2013 13th IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS) (IEEE), 499–506. doi:10.1109/humanoids.2013.7030020

Hayne, R., Luo, R., and Berenson, D. (2016). “Considering Avoidance and Consistency in Motion Planning for Human-Robot Manipulation in a Shared Workspace,” in 2016 IEEE International Conference on Robotics and Automation (ICRA). Editors A. Okamura, A. Menciassi, A. Ude, D. Burschka, D. Lee, F. Arrichiello et al. (IEEE), 3948–3954. doi:10.1109/icra.2016.7487584

Hemminghaus, J., and Kopp, S. (2017). “Towards Adaptive Social Behavior Generation for Assistive Robots Using Reinforcement Learning,” in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI’17) (IEEE), 332–340. doi:10.1145/2909824.3020217

Herlant, L. V., Holladay, R. M., and Srinivasa, S. S. (2016). “Assistive Teleoperation of Robot Arms via Automatic Time-Optimal Mode Switching,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 35–42. doi:10.1109/hri.2016.7451731

Hietanen, A., Changizi, A., Lanz, M., Kamarainen, J., Ganguly, P., Pieters, R., et al. (2019). “Proof of Concept of a Projection-Based Safety System for Human-Robot Collaborative Engine Assembly,” in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE). doi:10.1109/ro-man46459.2019.8956446

Hoffman, G., Birnbaum, G. E., Vanunu, K., Sass, O., and Reis, H. T. (2014). “Robot Responsiveness to Human Disclosure Affects Social Impression and Appeal,” in HRI’14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (IEEE), 1–8. doi:10.1145/2559636.2559660

Horiguchi, Y., Sawaragi, T., and Akashi, G. (2000). “Naturalistic Human-Robot Collaboration Based upon Mixed-Initiative Interactions in Teleoperating Environment,” in SMC 2000 Conference Proceedings: 2000 IEEE International Conference on Systems, Man & Cybernetics, Vol. 1-5 (IEEE), 876–881.

Huang, C.-M., and Mutlu, B. (2016). “Anticipatory Robot Control for Efficient Human-Robot Collaboration,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 83–90. doi:10.1109/hri.2016.7451737

Iossifidis, I., and Schoner, G. (2004). “Autonomous Reaching and Obstacle Avoidance with the Anthropomorphic Arm of a Robotic Assistant Using the Attractor Dynamics Approach,” in 2004 IEEE International Conference on Robotics and Automation, Vols 1-5, Proceedings (IEEE), 4295–4300. doi:10.1109/robot.2004.1302393

Iqbal, T., and Riek, L. D. (2019). “Human-Robot Teaming: Approaches from Joint Action and Dynamical Systems,” in Humanoid Robotics: A Reference (Netherlands: Springer Netherlands), 2293–2312. doi:10.1007/978-94-007-6046-2_137

Itadera, S., Kobayashi, T., Nakanishi, J., Aoyama, T., and Hasegawa, Y. (2019). “Impedance Control Based Assistive Mobility Aid through Online Classification of User’s State,” in 2019 IEEE/SICE International Symposium On System Integration (SII) (IEEE), 243–248. doi:10.1109/sii.2019.8700458

Ivanova, E., Krause, A., Schaelicke, M., Schellhardt, F., Jankowski, N., Achner, J., et al. (2017). “Let’s Do This Together: Bi-manu-interact, a Novel Device for Studying Human Haptic Interactive Behavior,” in 2017 International Conference on Rehabilitation Robotics (ICORR). Editors F. Amirabdollahian, E. Burdet, and L. Masia (IEEE), 708–713. doi:10.1109/icorr.2017.8009331

Jarrassé, N., Charalambous, T., and Burdet, E. (2012). A Framework to Describe, Analyze and Generate Interactive Motor Behaviors. PLoS One 7, e49945. doi:10.1371/journal.pone.0049945

Jarrasse, N., Paik, J., Pasqui, V., and Morel, G. (2008). “How Can Human Motion Prediction Increase Transparency?,” in 2008 IEEE International Conference on Robotics and Automation, Vols 1-9 (IEEE), 2134–2139.

Javdani, S., Admoni, H., Pellegrinelli, S., Srinivasa, S. S., and Bagnell, J. A. (2018). Shared Autonomy via Hindsight Optimization for Teleoperation and Teaming. Int. J. Robotics Res. 37, 717–742. doi:10.1177/0278364918776060

Javdani, S., Srinivasa, S. S., and Bagnell, J. A. (2015). “Shared Autonomy via Hindsight Optimization,” in Robotics: Science and Systems (RSS 2015).

Jensen, L. C., Fischer, K., Suvei, S.-D., and Bodenhagen, L. (2017). “Timing of Multimodal Robot Behaviors During Human-Robot Collaboration,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Editors A. Howard, K. Suzuki, and L. Zollo (IEEE), 1061–1066. doi:10.1109/roman.2017.8172435

Kanero, J., Franko, I., Oranc, C., Ulusahin, O., Koskulu, S., Adiguzel, Z., et al. (2018). “Who Can Benefit from Robots? Effects of Individual Differences in Robot-Assisted Language Learning,” in 2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EPIROB) (IEEE), 212–217.

Kennedy, J., Baxter, P., Senft, E., and Belpaeme, T. (2016). “Social Robot Tutoring for Child Second Language Learning,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 231–238. doi:10.1109/hri.2016.7451757

Kim, M.-G., Oosterling, I., Lourens, T., Staal, W., Buitelaar, J., Glennon, J., et al. (2014). “Designing Robot-Assisted Pivotal Response Training in Game Activity for Children with Autism,” in 2014 IEEE International Conference on Systems, Man and Cybernetics (SMC) (IEEE), 1101–1106. doi:10.1109/smc.2014.6974061

Knepper, R. A., Mavrogiannis, C. I., Proft, J., and Liang, C. (2017). “Implicit Communication in a Joint Action,” in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI’17) (IEEE), 283–292. doi:10.1145/2909824.3020226

Kontogiorgos, D., Pereira, A., Sahindal, B., van Waveren, S., and Gustafson, J. (2020). “Behavioural Responses to Robot Conversational Failures,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ‘20) (Assoc Computing Machinery) (IEEE), 53–62. doi:10.1145/3319502.3374782

Koppula, H. S., Jain, A., and Saxena, A. (2016). “Anticipatory Planning for Human-Robot Teams,” in Experimental Robotics. Vol. 109 of Springer Tracts in Advanced Robotics. Editors M. Hsieh, O. Khatib, and V. Kumar (New York City: Springer International Publishing AG), 453–470. doi:10.1007/978-3-319-23778-7_30

Koskinopoulou, M., Piperakis, S., and Trahanias, P. (2016). “Learning from Demonstration Facilitates Human-Robot Collaborative Task Execution,” in Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI’16) (Assoc Computing Machinery) (IEEE), 59–66. doi:10.1109/hri.2016.7451734

Koustoumpardis, P. N., Chatzilygeroudis, K. I., Synodinos, A. I., and Aspragathos, N. A. (2016). “Human Robot Collaboration for Folding Fabrics Based on Force/RGB-D Feedback,” in Advances in Robot Design and Intelligent Control. Vol. 371 of Advances in Intelligent Systems and Computing. Editor T. Borangiu (Berlin: Springer-Verlag), 235–243. doi:10.1007/978-3-319-21290-6_24

Kratz, S., and Ferriera, F. R. (2016). “Immersed Remotely: Evaluating the Use of Head Mounted Devices for Remote Collaboration in Robotic Telepresence,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 638–645. doi:10.1109/roman.2016.7745185

Kwon, W. Y., and Suh, I. H. (2012). “A Temporal Bayesian Network with Application to Design of a Proactive Robotic Assistant,” in 2012 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 3685–3690. doi:10.1109/icra.2012.6224673

Kyrkjebo, E., Laastad, M. J., and Stavdahl, O. (2018). “Feasibility of the Ur5 Industrial Robot for Robotic Rehabilitation of the Upper Limbs after Stroke,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Editors A. Maciejewski, A. Okamura, A. Bicchi, C. Stachniss, D. Song, D. Lee et al. (IEEE), 6124–6129. doi:10.1109/iros.2018.8594413

Lambrecht, J., and Nimpsch, S. (2019). “Human Prediction for the Natural Instruction of Handovers in Human Robot Collaboration,” in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE). doi:10.1109/ro-man46459.2019.8956379

Law, E., Cai, V., Liu, Q. F., Sasy, S., Goh, J., Blidaru, A., et al. (2017). “A Wizard-Of-Oz Study of Curiosity in Human-Robot Interaction,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Editors A. Howard, K. Suzuki, and L. Zollo (IEEE), 607–614. doi:10.1109/roman.2017.8172365

Lee, H., and Hogan, N. (2016). “Essential Considerations for Design and Control of Human-Interactive Robots,” in 2016 IEEE International Conference on Robotics and Automation (ICRA). Editors A. Okamura, A. Menciassi, A. Ude, D. Burschka, D. Lee, F. Arrichiello et al. (IEEE), 3069–3074. doi:10.1109/icra.2016.7487472

Leite, I., McCoy, M., Lohani, M., Ullman, D., Salomons, N., Stokes, C., et al. (2015). “Emotional Storytelling in the Classroom: Individual versus Group Interaction Between Children and Robots,” in Proceedings of the 2015 ACM/IEEE International Conference on Human-Robot Interaction (HRI’15) (Assoc Computing Machinery) (IEEE), 75–82.

Li, Y., Tee, K. P., Chan, W. L., Yan, R., Chua, Y., and Limbu, D. K. (2015). “Role Adaptation of Human and Robot in Collaborative Tasks,” in 2015 IEEE International Conference on Robotics and Automation (ICRA) (IEEE Computer Soc) (IEEE), 5602–5607. doi:10.1109/icra.2015.7139983

Li, Z., and Hollis, R. (2019). “Toward a Ballbot for Physically Leading People: A Human-Centered Approach,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE), 4827–4833. doi:10.1109/iros40897.2019.8968546

Lim, D., Kim, W., Lee, H., Kim, H., Shin, K., Park, T., et al. (2015). “Development of a Lower Extremity Exoskeleton Robot with a Quasi-Anthropomorphic Design Approach for Load Carriage,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE), 5345–5350. doi:10.1109/iros.2015.7354132

Lin, T.-C., Krishnan, A. U., and Li, Z. (2019). “Physical Fatigue Analysis of Assistive Robot Teleoperation via Whole-Body Motion Mapping,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE), 2240–2245. doi:10.1109/iros40897.2019.8968544

Liu, H., Wang, Y., Ji, W., and Wang, L. (2018). “A Context-Aware Safety System for Human-Robot Collaboration,” in 28th International Conference on Flexible Automation and Intelligent Manufacturing (FAIM2018): Global Integration of Intelligent Manufacturing and Smart Industry for Good of Humanity. Vol. 17 of Procedia Manufacturing. Editors D. Sormaz, G. Suer, and F. Chen (Elsevier Science Bv), 238–245. doi:10.1016/j.promfg.2018.10.042

Losey, D. P., McDonald, C. G., Battaglia, E., and O’Malley, M. K. (2018). A Review of Intent Detection, Arbitration, and Communication Aspects of Shared Control for Physical Human–Robot Interaction. Appl. Mech. Rev. 70. doi:10.1115/1.4039145

Louie, W.-Y. G., Vaquero, T., Nejat, G., and Beck, J. C. (2014). “An Autonomous Assistive Robot for Planning, Scheduling and Facilitating Multi-User Activities,” in 2014 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 5292–5298. doi:10.1109/icra.2014.6907637

Lubold, N., Walker, E., and Pon-Barry, H. (2016). “Effects of Voice-Adaptation and Social Dialogue on Perceptions of a Robotic Learning Companion,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 255–262. doi:10.1109/hri.2016.7451760

Luo, J., Yang, C., Burdet, E., and Li, Y. (2020). “Adaptive Impedance Control with Trajectory Adaptation for Minimizing Interaction Force,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 1360–1365. doi:10.1109/ro-man47096.2020.9223572

Machino, T., Iwaki, S., Kawata, H., Yanagihara, Y., Nanjo, Y., and Shimokura, K.-i. (2006). “Remote-Collaboration System Using Mobile Robot with Camera and Projector,” in 2006 IEEE International Conference on Robotics and Automation (ICRA), Vols 1-10 (IEEE), 4063+.

Maeda, G., Ewerton, M., Lioutikov, R., Ben Amor, H., Peters, J., and Neumann, G. (2014). “Learning Interaction for Collaborative Tasks with Probabilistic Movement Primitives,” in 2014 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids) (IEEE), 527–534. doi:10.1109/humanoids.2014.7041413

Malik, N. A., Yussof, H., and Hanapiah, F. A. (2014). Development of Imitation Learning through Physical Therapy Using a Humanoid Robot. Proced. Comput. Sci. 42, 191–197. doi:10.1016/j.procs.2014.11.051

Malik, N. A., Yussof, H., and Hanapiah, F. A. (2017). Interactive Scenario Development of Robot-Assisted Therapy for Cerebral Palsy: A Face Validation Survey. Proced. Comput. Sci. 105, 322–327. doi:10.1016/j.procs.2017.01.229

Mariotti, E., Magrini, E., and De Luca, A. (2019). “Admittance Control for Human-Robot Interaction Using an Industrial Robot Equipped with a F/T Sensor,” in 2019 International Conference on Robotics and Automation (ICRA). Editors A. Howard, K. Althoefer, F. Arai, F. Arrichiello, B. Caputo, J. Castellanos et al. (IEEE), 6130–6136. doi:10.1109/icra.2019.8793657

Martelaro, N., Nneji, V. C., Ju, W., and Hinds, P. (2016). “Tell Me More: Designing Hri to Encourage More Trust, Disclosure, and Companionship,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 181–188. doi:10.1109/hri.2016.7451864

Matarić, M. J., and Scassellati, B. (2016). Socially Assistive Robotics. Berlin, Germany: Springer Handbook of Robotics, 1973–1994.

Mataric, M., Tapus, A., Winstein, C., and Eriksson, J. (2009). “Socially Assistive Robotics for Stroke and Mild Tbi Rehabilitation,” in Advanced Technologies in Rehabilitation: Empowering Cognitive, Physical, Social and Communicative Skills Through Virtual Reality, Robots, Wearable Systems and Brain-computer Interfaces. Vol. 145 of Studies in Health Technology and Informatics. Editors A. Gaggioli, E. Keshner, P. Weiss, and G. Riva (Amsterdam, Netherlands: IOS Press), 249–262.

Mertens, A., Reiser, U., Brenken, B., Luedtke, M., Haegele, M., Verl, A., et al. (2011). “Assistive Robots in Eldercare and Daily Living: Automation of Individual Services for Senior Citizens,” in Intelligent Robotics and Applications, PT I: ICIRA 2011. Vol. 7101 of Lecture Notes in Artificial Intelligence. Editors S. Jeschke, H. Liu, and D. Schilberg (Berlin: Springer-Verlag), 542+. doi:10.1007/978-3-642-25486-4_54

Meyer, S., and Fricke, C. (2017). “Robotic Companions in Stroke Therapy: A User Study on the Efficacy of Assistive Robotics Among 30 Patients in Neurological Rehabilitation,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Editors A. Howard, K. Suzuki, and L. Zollo (IEEE), 135–142. doi:10.1109/roman.2017.8172292

Milliez, G., Lallement, R., Fiore, M., and Alami, R. (2016). “Using Human Knowledge Awareness to Adapt Collaborative Plan Generation, Explanation and Monitoring,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 43–50. doi:10.1109/hri.2016.7451732

Miura, J., Kadekawa, S., Chikaarashi, K., and Sugiyama, J. (2016). “Human-Robot Collaborative Remote Object Search,” in Intelligent Autonomous Systems 13. Vol. 302 of Advances in Intelligent Systems and Computing. Editors E. Menegatti, N. Michael, K. Berns, and H. Yamaguchi (Berlin: Springer-Verlag), 979–991. doi:10.1007/978-3-319-08338-4_71

Mizanoor Rahman, S. M. (2020). “Grasp Rehabilitation of Stroke Patients through Object Manipulation with an Intelligent Power Assist Robotic System,” in Proceedings of the Future Technologies Conference (FTC) 2019, Vol. 2. Vol. 1070 of Advances in Intelligent Systems and Computing. Editors K. Arai, R. Bhatia, and S. Kapoor (New York City: Springer International Publishing Ag), 244–259. doi:10.1007/978-3-030-32523-7_16

Mok, B. (2016). “Effects of Proactivity and Expressivity on Collaboration with Interactive Robotic Drawers,” in Eleventh ACM/IEEE International Conference on Human Robot Interaction (HRI’16) (Assoc Computing Machinery) (IEEE), 633–634. doi:10.1109/hri.2016.7451892

Monaikul, N., Abbasi, B., Rysbek, Z., Di Eugenio, B., and Zefran, M. (2020). “Role Switching in Task-Oriented Multimodal Human-Robot Collaboration,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 1150–1156. doi:10.1109/ro-man47096.2020.9223461

Moon, H.-S., Kim, W., Han, S., and Seo, J. (2018). “Observation of Human Trajectory in Response to Haptic Feedback from Mobile Robot,” in 2018 18th International Conference on Control, Automation and Systems (ICCAS) (IEEE), 1530–1534.

Mucchiani, C., Sharma, S., Johnson, M., Sefcik, J., Vivio, N., Huang, J., et al. (2017). “Evaluating Older Adults’ Interaction with a Mobile Assistive Robot,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Editors A. Bicchi, and A. Okamura (IEEE), 840–847. doi:10.1109/iros.2017.8202246

Mueller, S. L., Schroeder, S., Jeschke, S., and Richert, A. (2017). “Design of a Robotic Workmate,” in Digital Human Modeling: Applications in Health, Safety, Ergonomics, and Risk Management: Ergonomics and Design. Vol. 10286 of Lecture Notes in Computer Science. Editor V. Duffy (New York City: Springer International Publishing Ag), 447–456.

Muratore, L., Laurenzi, A., and Tsagarakis, N. G. (2019). “A Self-Modulated Impedance Multimodal Interaction Framework for Human-Robot Collaboration,” in 2019 International Conference on Robotics and Automation (ICRA). Editors A. Howard, K. Althoefer, F. Arai, F. Arrichiello, B. Caputo, J. Castellanos et al. (IEEE), 4998–5004. doi:10.1109/icra.2019.8794168

Mussakhojayeva, S., Kalidolda, N., and Sandygulova, A. (2017). “Adaptive Strategies for Multi-Party Interactions with Robots in Public Spaces,” in Social Robotics, ICSR 2017. Vol. 10652 of Lecture Notes in Artificial Intelligence. Editors A. Kheddar, E. Yoshida, S. Ge, K. Suzuki, J. Cabibihan, F. Eyssel et al. (New York City: Springer International Publishing Ag), 749–758. doi:10.1007/978-3-319-70022-9_74

Muthusamy, R., Indave, J. M., Muthusamy, P. K., Hasan, E. F., Zweiri, Y., Kyrki, V., et al. (2019). “Investigation and Design of Robotic Assistance Control System for Cooperative Manipulation,” in 2019 9th IEEE Annual International Conference on Cyber Technology in Automation, Control, and Intelligent Systems (IEEE-Cyber 2019) (IEEE), 889–895. doi:10.1109/cyber46603.2019.9066573

Nabipour, M., and Moosavian, S. A. A. (2018). “Dynamics Modeling and Performance Analysis of Robowalk,” in 2018 6th RSI International Conference on Robotics and Mechatronics (ICROM 2018) (IEEE), 445–450. doi:10.1109/icrom.2018.8657593

Nakabayashi, K., Iwasaki, Y., Takahashi, S., and Iwata, H. (2018). “Experimental Evaluation of Cooperativeness and Collision Safety of a Wearable Robot Arm,” in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2018). Editors J. Cabibihan, F. Mastrogiovanni, A. Pandey, S. Rossi, and M. Staffa (IEEE), 1026–1031. doi:10.1109/roman.2018.8525620

Nemlekar, H., Dutia, D., and Li, Z. (2019). “Object Transfer Point Estimation for Fluent Human-Robot Handovers,” in 2019 International Conference on Robotics and Automation (ICRA). Editors A. Howard, K. Althoefer, F. Arai, F. Arrichiello, B. Caputo, J. Castellanos et al. (IEEE), 2627–2633. doi:10.1109/icra.2019.8794008

Newman, B. A., Aronson, R. M., Srinivasa, S. S., Kitani, K., and Admoni, H. (2018). Harmonic: A Multimodal Dataset of Assistive Human-Robot Collaboration. CoRR abs/1807.11154.

Newman, B. A., Biswas, A., Ahuja, S., Girdhar, S., Kitani, K. K., and Admoni, H. (2020b). “Examining the Effects of Anticipatory Robot Assistance on Human Decision Making,” in Social Robotics. Editors A. R. Wagner, D. Feil-Seifer, K. S. Haring, S. Rossi, T. Williams, H. He et al. (New York City: Springer International Publishing), 590–603. doi:10.1007/978-3-030-62056-1_49

Newman, B., Carlberg, K., and Desai, R. (2020a). Optimal Assistance for Object-Rearrangement Tasks in Augmented Reality.

Nguyen, H., and Kemp, C. C. (2008). “Bio-Inspired Assistive Robotics: Service Dogs as a Model for Human-Robot Interaction and Mobile Manipulation,” in 2008 2nd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2008), Vols 1 and 2 (IEEE), 542–549. doi:10.1109/biorob.2008.4762910

Nguyen, P. D. H., Bottarel, F., Pattacini, U., Hoffmann, M., Natale, L., and Metta, G. (2018). “Merging Physical and Social Interaction for Effective Human-Robot Collaboration,” in 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids). Editor T. Asfour (IEEE), 710–717. doi:10.1109/humanoids.2018.8625030

Nie, G., Zheng, Z., Johnson, J., Swanson, A. R., Weitlauf, A. S., Warren, Z. E., et al. (2018). “Predicting Response to Joint Attention Performance in Human-Human Interaction Based on Human-Robot Interaction for Young Children with Autism Spectrum Disorder,” in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2018). Editors J. Cabibihan, F. Mastrogiovanni, A. Pandey, S. Rossi, and M. Staffa (IEEE), 1069–1074. doi:10.1109/roman.2018.8525634

Nikolaidis, S., Kuznetsov, A., Hsu, D., and Srinivasa, S. (2016). “Formalizing Human-Robot Mutual Adaptation: A Bounded Memory Model,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 75–82. doi:10.1109/hri.2016.7451736

Noda, T., Furukawa, J.-i., Teramae, T., Hyon, S.-H., and Morimoto, J. (2013). “An Electromyogram Based Force Control Coordinated in Assistive Interaction,” in 2013 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 2657–2662. doi:10.1109/icra.2013.6630942

Ono, H., Koike, K., Morita, T., and Yamaguchi, T. (2019). “Ontologies-Based Pupil Robot Interaction with Group Discussion,” in Knowledge-based and Intelligent Information & Engineering Systems (KES 2019). Vol. 159 of Procedia Computer Science. Editors I. Rudas, C. Janos, C. Toro, J. Botzheim, R. Howlett, and L. Jain (Amsterdam: Elsevier), 2071–2080. doi:10.1016/j.procs.2019.09.380

Papageorgiou, X. S., Chalvatzaki, G., Dometios, A. C., and Tzafestas, C. S. (2019). “Human-Centered Service Robotic Systems for Assisted Living,” in Advances in Service and Industrial Robotics, RAAD 2018. Vol. 67 of Mechanisms and Machine Science. Editors N. Aspragathos, P. Koustoumpardis, and V. Moulianitis (New York City: Springer International Publishing AG), 132–140. doi:10.1007/978-3-030-00232-9_14

Parlitz, C., Hägele, M., Klein, P., Seifert, J., and Dautenhahn, K. (2008). Care-O-bot 3 — Rationale for Human-Robot Interaction Design.

Peters, R., Broekens, J., Li, K., and Neerincx, M. A. (2019). “Robots Expressing Dominance: Effects of Behaviours and Modulation,” in 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (IEEE). doi:10.1109/acii.2019.8925500

Pham, T. X. N., Hayashi, K., Becker-Asano, C., Lacher, S., and Mizuuchi, I. (2017). “Evaluating the Usability and Users’ Acceptance of a Kitchen Assistant Robot in Household Environment,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Editors A. Howard, K. Suzuki, and L. Zollo (IEEE), 987–992. doi:10.1109/roman.2017.8172423

Polishuk, A., and Verner, I. (2018). “An Elementary Science Class with a Robot Teacher,” in Robotics in Education: Latest Results and Developments. Vol. 630 of Advances in Intelligent Systems and Computing. Editors W. Lepuschitz, M. Merdan, G. Koppensteiner, R. Balogh, and D. Obdrzalek (New York City: Springer International Publishing AG), 263–273. doi:10.1007/978-3-319-62875-2_24

Pripfl, J., Koertner, T., Batko-Klein, D., Hebesberger, D., Weninger, M., Gisinger, C., et al. (2016). “Results of a Real World Trial with a Mobile Social Service Robot for Older Adults,” in Eleventh ACM/IEEE International Conference on Human-Robot Interaction (HRI ’16) (ACM/IEEE), 497–498. doi:10.1109/hri.2016.7451824

R Core Team (2017). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.

Racca, M., Kyrki, V., and Cakmak, M. (2020). “Interactive Tuning of Robot Program Parameters via Expected Divergence Maximization,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20) (ACM/IEEE), 629–638. doi:10.1145/3319502.3374784

Rahman, S. M. M. (2019a). “Bioinspired Dynamic Affect-Based Motion Control of a Humanoid Robot to Collaborate with Human in Manufacturing,” in 2019 12th International Conference on Human System Interaction (HSI) (IEEE), 76–81. doi:10.1109/hsi47298.2019.8942609

Rahman, S. M. M. (2019b). “Human Features-Based Variable Admittance Control for Improving HRI and Performance in Power-Assisted Heavy Object Manipulation,” in 2019 12th International Conference on Human System Interaction (HSI) (IEEE), 87–92. doi:10.1109/hsi47298.2019.8942628

Ramachandran, A., Litoiu, A., and Scassellati, B. (2016). “Shaping Productive Help-Seeking Behavior During Robot-Child Tutoring Interactions,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 247–254. doi:10.1109/hri.2016.7451759

Reyes, M., Meza, I., and Pineda, L. A. (2016). “The Positive Effect of Negative Feedback in HRI Using a Facial Expression Robot,” in Cultural Robotics, CR 2015. Vol. 9549 of Lecture Notes in Computer Science. Editors J. Koh, B. Dunstan, D. Silvera-Tawil, and M. Velonaki (New York City: Springer International Publishing AG), 44–54. doi:10.1007/978-3-319-42945-8_4

Rhim, J., Cheung, A., Pham, D., Bae, S., Zhang, Z., Townsend, T., et al. (2019). “Investigating Positive Psychology Principles in Affective Robotics,” in 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (IEEE). doi:10.1109/acii.2019.8925475

Riether, N., Hegel, F., Wrede, B., and Horstmann, G. (2012). “Social Facilitation with Social Robots?,” in HRI ’12: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (ACM/IEEE), 41–47.

Robinette, P., Li, W., Allen, R., Howard, A. M., and Wagner, A. R. (2016). “Overtrust of Robots in Emergency Evacuation Scenarios,” in HRI ’16: The Eleventh ACM/IEEE International Conference on Human Robot Interaction (IEEE Press), 101–108. doi:10.1109/hri.2016.7451740

Rosenberg-Kima, R., Koren, Y., Yachini, M., and Gordon, G. (2019). “Human-Robot-Collaboration (HRC): Social Robots as Teaching Assistants for Training Activities in Small Groups,” in HRI ’19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (IEEE), 522–523. doi:10.1109/hri.2019.8673103

Rosenstrauch, M. J., Pannen, T. J., and Krueger, J. (2018). “Human Robot Collaboration - Using Kinect V2 for ISO/TS 15066 Speed and Separation Monitoring,” in 7th CIRP Conference on Assembly Technologies and Systems (CATS 2018). Vol. 76 of Procedia CIRP. Editors S. Wang, and Z. Wu (Elsevier Science BV), 183–186. doi:10.1016/j.procir.2018.01.026

Rossi, A., Dautenhahn, K., Koay, K. L., and Walters, M. L. (2020). “How Social Robots Influence People’s Trust in Critical Situations,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 1020–1025. doi:10.1109/ro-man47096.2020.9223471

Sabanovic, S., Bennett, C. C., Chang, W.-L., and Huber, L. (2013). “Paro Robot Affects Diverse Interaction Modalities in Group Sensory Therapy for Older Adults with Dementia,” in 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR) (IEEE). doi:10.1109/icorr.2013.6650427

Salichs, E., Fernández-Rodicio, E., Castillo, J. C., Castro-González, Á., Malfaz, M., and Salichs, M. Á. (2018). “A Social Robot Assisting in Cognitive Stimulation Therapy,” in Advances in Practical Applications of Agents, Multi-Agent Systems, and Complexity: The PAAMS Collection. Vol. 10978 of Lecture Notes in Artificial Intelligence. Editors Y. Demazeau, B. An, J. Bajo, and A. Fernandez Caballero (New York City: Springer International Publishing AG), 344–347. doi:10.1007/978-3-319-94580-4_35

Sandygulova, A., Johal, W., Zhexenova, Z., Tleubayev, B., Zhanatkyzy, A., Turarova, A., et al. (2020). “Cowriting Kazakh: Learning a New Script with a Robot,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20) (ACM/IEEE), 113–120.

Savur, C., Kumar, S., and Sahin, F. (2019). “A Framework for Monitoring Human Physiological Response During Human Robot Collaborative Task,” in 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (IEEE), 385–390. doi:10.1109/smc.2019.8914593

Schmidtler, J., and Bengler, K. (2016). “Size-Weight Illusion in Human-Robot Collaboration,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 874–879. doi:10.1109/roman.2016.7745222

Schmidtler, J., Koerber, M., and Bengler, K. (2016). “A Trouble Shared is a Trouble Halved - Usability Measures for Human-Robot Collaboration,” in 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (IEEE), 217–222. doi:10.1109/smc.2016.7844244

Schneider, S., and Kummert, F. (2016). “Motivational Effects of Acknowledging Feedback from a Socially Assistive Robot,” in Social Robotics (ICSR 2016). Vol. 9979 of Lecture Notes in Artificial Intelligence. Editors A. Agah, J. Cabibihan, A. Howard, M. Salichs, and H. He (Berlin: Springer-Verlag), 870–879. doi:10.1007/978-3-319-47437-3_85

Sebo, S. S., Dong, L. L., Chang, N., and Scassellati, B. (2020). “Strategies for the Inclusion of Human Members within Human-Robot Teams,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20) (ACM/IEEE), 309–317. doi:10.1145/3319502.3374808

Sebo, S. S., Traeger, M., Jung, M., and Scassellati, B. (2018). “The Ripple Effects of Vulnerability: The Effects of a Robot’s Vulnerable Behavior on Trust in Human-Robot Teams,” in HRI ’18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (ACM/IEEE), 178–186.

Shamaei, K., Che, Y., Murali, A., Sen, S., Patil, S., Goldberg, K., et al. (2015). “A Paced Shared-Control Teleoperated Architecture for Supervised Automation of Multilateral Surgical Tasks,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE), 1434–1439. doi:10.1109/iros.2015.7353556

Shamsuddin, S., Zulkifli, W. Z., Hwee, L. T., and Yussof, H. (2017). “Animal Robot as Augmentative Strategy to Elevate Mood: A Preliminary Study for Post-Stroke Depression,” in Interactive Collaborative Robotics (ICR 2017). Vol. 10459 of Lecture Notes in Artificial Intelligence. Editors A. Ronzhin, G. Rigoll, and R. Meshcheryakov (New York City: Springer International Publishing AG), 209–218. doi:10.1007/978-3-319-66471-2_23

Shao, M., Alves, S. F. D. R., Ismail, O., Zhang, X., Nejat, G., and Benhabib, B. (2019). “You Are Doing Great! Only One Rep Left: An Affect-Aware Social Robot for Exercising,” in 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (IEEE), 3811–3817. doi:10.1109/smc.2019.8914198

Sharifara, A., Ramesh Babu, A., Rajavenkatanarayanan, A., Collander, C., and Makedon, F. (2018). “A Robot-Based Cognitive Assessment Model Based on Visual Working Memory and Attention Level,” in Universal Access in Human-Computer Interaction: Methods, Technologies, and Users, UAHCI 2018, Part I. Vol. 10907 of Lecture Notes in Computer Science. Editors M. Antona, and C. Stephanidis (New York City: Springer International Publishing AG), 583–597. doi:10.1007/978-3-319-92049-8_43

Shayan, A. M., Sarmadi, A., Pirastehzad, A., Moradi, H., and Soleiman, P. (2016). “RoboParrot 2.0: A Multi-Purpose Social Robot,” in 2016 4th RSI International Conference on Robotics and Mechatronics (ICROM) (IEEE), 422–427.

Shibata, T., Mitsui, T., Wada, K., Touda, A., Kumasaka, T., Tagami, K., et al. (2001). “Mental Commit Robot and its Application to Therapy of Children,” in 2001 IEEE/ASME International Conference on Advanced Intelligent Mechatronics Proceedings, Vols I and II (IEEE), 1053–1058.

Shu, B., Sziebig, G., and Pieska, S. (2018). “Human-Robot Collaboration: Task Sharing through Virtual Reality,” in IECON 2018 - 44th Annual Conference of the IEEE Industrial Electronics Society (IEEE), 6040–6044. doi:10.1109/iecon.2018.8591102

Sierra, S. D., Jimenez, M. F., Munera, M. C., Frizera-Neto, A., and Cifuentes, C. A. (2019). “Remote-Operated Multimodal Interface for Therapists During Walker-Assisted Gait Rehabilitation: A Preliminary Assessment,” in HRI ’19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (IEEE), 528–529. doi:10.1109/hri.2019.8673099

Sirithunge, H. P. C., Muthugala, M. A. V. J., Jayasekara, A. G. B. P., and Chandima, D. P. (2018). “A Wizard of Oz Study of Human Interest towards Robot Initiated Human-Robot Interaction,” in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2018). Editors J. Cabibihan, F. Mastrogiovanni, A. Pandey, S. Rossi, and M. Staffa (IEEE), 515–521. doi:10.1109/roman.2018.8525583

Sirkin, D., Mok, B., Yang, S., and Ju, W. (2015). “Mechanical Ottoman: How Robotic Furniture Offers and Withdraws Support,” in 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 11–18.

Srinivasa, S. S., Berenson, D., Cakmak, M., Collet, A., Dogar, M. R., Dragan, A. D., et al. (2012). HERB 2.0: Lessons Learned from Developing a Mobile Manipulator for the Home. Proc. IEEE 100, 2410–2428. doi:10.1109/jproc.2012.2200561

Srinivasa, S. S., Ferguson, D., Helfrich, C. J., Berenson, D., Collet, A., Diankov, R., et al. (2010). HERB: A Home Exploring Robotic Butler. Auton. Robot. 28, 5–20. doi:10.1007/s10514-009-9160-9

Stoll, B., Reig, S., He, L., Kaplan, I., Jung, M. F., and Fussell, S. R. (2018). ““Wait, Can You Move the Robot?”: Examining Telepresence Robot Use in Collaborative Teams,” in HRI ’18: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (ACM/IEEE), 14–22.

Strohkorb, S., Fukuto, E., Warren, N., Taylor, C., Berry, B., and Scassellati, B. (2016). “Improving Human-Human Collaboration between Children with a Social Robot,” in 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (IEEE), 551–556. doi:10.1109/roman.2016.7745172

Su, H., Sandoval, J., Makhdoomi, M., Ferrigno, G., and De Momi, E. (2018). “Safety-Enhanced Human-Robot Interaction Control of Redundant Robot for Teleoperated Minimally Invasive Surgery,” in 2018 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 6611–6616. doi:10.1109/icra.2018.8463148

Svarny, P., Tesar, M., Behrens, J. K., and Hoffmann, M. (2019). “Safe Physical HRI: Toward a Unified Treatment of Speed and Separation Monitoring Together with Power and Force Limiting,” in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE), 7580–7587. doi:10.1109/iros40897.2019.8968463

Tabrez, A., Agrawal, S., and Hayes, B. (2019). “Explanation-Based Reward Coaching to Improve Human Performance via Reinforcement Learning,” in HRI ’19: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (IEEE), 249–257. doi:10.1109/hri.2019.8673104

Taheri, A. R., Alemi, M., Meghdari, A., PourEtemad, H. R., and Basiri, N. M. (2014). “Social Robots as Assistants for Autism Therapy in Iran: Research in Progress,” in 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICROM) (IEEE), 760–766. doi:10.1109/icrom.2014.6990995

Tapus, A., Tapus, C., and Mataric, M. J. (2007). “Hands-off Therapist Robot Behavior Adaptation to User Personality for Post-Stroke Rehabilitation Therapy,” in Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Vols 1-10 (IEEE), 1547+. doi:10.1109/robot.2007.363544

Terzioglu, Y., Mutlu, B., and Sahin, E. (2020). “Designing Social Cues for Collaborative Robots: The Role of Gaze and Breathing in Human-Robot Collaboration,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20) (ACM/IEEE), 343–357.

Till, J., Bryson, C. E., Chung, S., Orekhov, A., and Rucker, D. C. (2015). “Efficient Computation of Multiple Coupled Cosserat Rod Models for Real-Time Simulation and Control of Parallel Continuum Manipulators,” in 2015 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 5067–5074. doi:10.1109/icra.2015.7139904

Torrey, C., Fussell, S. R., and Kiesler, S. (2013). “How a Robot Should Give Advice,” in Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2013). Editors H. Kuzuoka, V. Evers, M. Imai, and J. Forlizzi (IEEE), 275–282. doi:10.1109/hri.2013.6483599

Uluer, P., Kose, H., Oz, B. K., Aydinalev, T. C., and Barkana, D. E. (2020). “Towards an Affective Robot Companion for Audiology Rehabilitation: How Does Pepper Feel Today?,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 567–572.

Unhelkar, V. V., Li, S., and Shah, J. A. (2020). “Decision-Making for Bidirectional Communication in Sequential Human-Robot Collaborative Tasks,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20) (ACM/IEEE), 329–341. doi:10.1145/3319502.3374779

Unhelkar, V. V., Siu, H. C., and Shah, J. A. (2014). “Comparative Performance of Human and Mobile Robotic Assistants in Collaborative Fetch-and-Deliver Tasks,” in HRI ’14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (IEEE), 82–89. doi:10.1145/2559636.2559655

van Minkelen, P., Gruson, C., van Hees, P., Willems, M., de Wit, J., Aarts, R., et al. (2020). “Using Self-Determination Theory in Social Robots to Increase Motivation in L2 Word Learning,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20) (ACM/IEEE), 369–377. doi:10.1145/3319502.3374828

Varrasi, S., Di Nuovo, S., Conti, D., and Di Nuovo, A. (2019). “Social Robots as Psychometric Tools for Cognitive Assessment: A Pilot Test,” in Human Friendly Robotics. Vol. 7 of Springer Proceedings in Advanced Robotics. Editors F. Ficuciello, F. Ruggiero, and A. Finzi (New York City: Springer International Publishing AG), 99–112. doi:10.1007/978-3-319-89327-3_8

Vatsal, V., and Hoffman, G. (2018). “Design and Analysis of a Wearable Robotic Forearm,” in 2018 IEEE International Conference on Robotics and Automation (ICRA) (IEEE), 5489–5496. doi:10.1109/icra.2018.8461212

Vishwanath, A., Singh, A., Chua, Y. H. V., Dauwels, J., and Magnenat-Thalmann, N. (2019). “Humanoid Co-workers: How is it like to Work with a Robot?,” in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE).

Vu, D.-S., Allard, U. C., Gosselin, C., Routhier, F., Gosselin, B., and Campeau-Lecours, A. (2017). “Intuitive Adaptive Orientation Control of Assistive Robots for People Living with Upper Limb Disabilities,” in 2017 International Conference on Rehabilitation Robotics (ICORR). Editors F. Amirabdollahian, E. Burdet, and L. Masia (IEEE), 795–800. doi:10.1109/icorr.2017.8009345

Wandke, H. (2005). Assistance in Human–Machine Interaction: A Conceptual Framework and a Proposal for a Taxonomy. Theoret. Iss. Ergonom. Sci. 6, 129–155. doi:10.1080/1463922042000295669

Wang, Y., Ajaykumar, G., and Huang, C.-M. (2020). “See What I See: Enabling User-Centric Robotic Assistance Using First-Person Demonstrations,” in Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20) (ACM/IEEE), 639–648.

Westlund, J. K., Gordon, G., Spaulding, S., Lee, J. J., Plummer, L., Martinez, M., et al. (2016). “Lessons from Teachers on Performing HRI Studies with Young Children in Schools,” in Eleventh ACM/IEEE International Conference on Human-Robot Interaction (HRI ’16) (ACM/IEEE), 383–390. doi:10.1109/hri.2016.7451776

Wieser, I., Toprak, S., Grenzing, A., Hinz, T., Auddy, S., Karaoguz, E. C., et al. (2016). “Robotic Home Assistant with Memory Aid Functionality,” in KI 2016: Advances in Artificial Intelligence. Vol. 9904 of Lecture Notes in Artificial Intelligence. Editors G. Friedrich, M. Helmert, and F. Wotawa (Berlin: Springer-Verlag), 102–115.

Winkle, K., and Bremner, P. (2017). “Investigating the Real World Impact of Emotion Portrayal through Robot Voice and Motion,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Editors A. Howard, K. Suzuki, and L. Zollo (IEEE), 627–634. doi:10.1109/roman.2017.8172368

Wood, L. J., Zaraki, A., Walters, M. L., Novanda, O., Robins, B., and Dautenhahn, K. (2017). “The Iterative Development of the Humanoid Robot Kaspar: An Assistive Robot for Children with Autism,” in Social Robotics, ICSR 2017. Vol. 10652 of Lecture Notes in Artificial Intelligence. Editors A. Kheddar, E. Yoshida, S. Ge, K. Suzuki, J. Cabibihan, F. Eyssel, et al. (New York City: Springer International Publishing AG), 53–63. doi:10.1007/978-3-319-70022-9_6

Wu, M., He, Y., and Liu, S. (2020). “Shared Impedance Control Based on Reinforcement Learning in a Human-Robot Collaboration Task,” in Advances in Service and Industrial Robotics. Vol. 980 of Advances in Intelligent Systems and Computing. Editors K. Berns, and D. Gorges (New York City: Springer International Publishing AG), 95–103. doi:10.1007/978-3-030-19648-6_12

Zafari, S., Schwaninger, I., Hirschmanner, M., Schmidbauer, C., Weiss, A., and Koeszegi, S. T. (2019). ““You are Doing So Great!” - The Effect of a Robot’s Interaction Style on Self-Efficacy in HRI,” in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE). doi:10.1109/ro-man46459.2019.8956437

Zhang, Z., Chen, Z., and Li, W. (2018). “Automating Robotic Furniture with a Collaborative Vision-Based Sensing Scheme,” in 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2018). Editors J. Cabibihan, F. Mastrogiovanni, A. Pandey, S. Rossi, and M. Staffa (IEEE), 719–725. doi:10.1109/roman.2018.8525783

Zhao, F., Henrichs, C., and Mutlu, B. (2020). “Task Interdependence in Human-Robot Teaming,” in 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (IEEE), 1143–1149. doi:10.1109/ro-man47096.2020.9223555

Zhou, H., Yang, L., Lv, H., Yi, K., Yang, H., and Yang, G. (2019). “Development of a Synchronized Human-Robot-Virtuality Interaction System Using Cooperative Robot and Motion Capture Device,” in 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM) (IEEE), 329–334. doi:10.1109/aim.2019.8868447

Zhu, H., Gabler, V., and Wollherr, D. (2017). “Legible Action Selection in Human-Robot Collaboration,” in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). Editors A. Howard, K. Suzuki, and L. Zollo (IEEE), 354–359. doi:10.1109/roman.2017.8172326

Zignoli, A., Biral, F., Yokoyama, K., and Shimono, T. (2019). “Including a Musculoskeletal Model in the Control Loop of an Assistive Robot for the Design of Optimal Target Forces,” in 45th Annual Conference of the IEEE Industrial Electronics Society (IECON 2019) (IEEE), 5394–5400. doi:10.1109/iecon.2019.8927125

Keywords: human robot interaction, assistive robotics, socially assistive robotics, physically assistive robotics, collaborative robotics, rehabilitative robotics

Citation: Newman BA, Aronson RM, Kitani K and Admoni H (2022) Helping People Through Space and Time: Assistance as a Perspective on Human-Robot Interaction. Front. Robot. AI 8:720319. doi: 10.3389/frobt.2021.720319

Received: 04 June 2021; Accepted: 25 November 2021;
Published: 27 January 2022.

Edited by:

Séverin Lemaignan, Pal Robotics S. L., Spain

Reviewed by:

Cristina Urdiales, University of Malaga, Spain
Yomna Abdelrahman, Munich University of the Federal Armed Forces, Germany

Copyright © 2022 Newman, Aronson, Kitani and Admoni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Benjamin A. Newman, newmanba@cmu.edu
