About this Research Topic
Continued development in the fields of artificial intelligence, robotics, and virtual reality means that true human-machine teaming is an imminent possibility. Machine agents of the future will be sophisticated teammates, able to contribute to team planning and strategy, and capable of executing complex tasks with minimal human oversight.
While these agents are likely to influence the nature of team tasks, team effectiveness and performance are not determined solely by the aggregate of individual members' abilities and inputs; they also depend on the successful integration and coordination of individual efforts through team processes and teamwork. As such, the success of future human-machine teams will depend, in part, on machine team members' successful engagement in teamwork. Considering that limited forms of human-machine teaming have already begun, research examining human-machine teaming is urgently needed.
In this Research Topic, we seek papers addressing how agent teammates affect teamwork factors and processes, including (but not limited to):
- Leadership. How is human leadership affected by the introduction of a machine teammate? Under what circumstances can a machine agent effectively lead a team? How are team dynamics affected if leadership roles transition between human and machine teammates based on contextual demands?
- Verbal communication. To what extent can naturalistic communication be achieved in human-machine interactions? What is necessary for this to happen? Is that goal necessary for all types of human-machine teamwork? Do communication limitations affect human perceptions of a machine teammate?
- Non-verbal communication. How important is it for a machine agent to be able to recognize and respond to non-verbal cues, such as posture, timing, and intonation, provided by teammates? What are the mechanisms by which this can be accomplished? How can these features be integrated to allow the agent to comprehend their meaning?
- Shared mental models. Shared mental models are critical in complex human-human teaming – can humans develop shared mental models with machine agents? Are these mental models truly “shared” across humans and machines? Can we demonstrate that shared human-machine mental models result in improved team performance?
- Trust. Can novel indices of trust in a machine agent, which are not reliant on task allocation (human vs. machine) and self-report measures, be developed? What additional research is needed to support development of more sophisticated human-machine teamwork models that include bi-directional interactions between trust and other critical team processes?
- Conflict resolution. As task-relevant conflicts between team members are likely to occur in complex, dynamic tasks where multiple solutions may be viable, how can conflicts between human and machine agents be managed and successfully resolved? What are the potential long-term consequences of human-machine conflicts on team processes?
- Adaptability. What are the factors that allow teams to succeed in the face of changing circumstances, despite machine teammates’ limited abilities to adapt? Are those adaptations occurring at the individual-level or team-level, or both?
- Performance monitoring. In human-machine teams, is the human tendency to satisfice rather than optimize outcomes likely to influence strategy, planning, and execution, even subtly, toward approaches that play to machine agents' capabilities and away from their deficiencies? If so, what are the consequences of those decisions for teamwork and team outcomes?
This Research Topic, therefore, seeks papers that address the above issues, as well as other theoretical, technical, and practice-oriented research on teamwork in human-machine teams and its linkages with team behaviors and performance.
Keywords: Human-Machine Teaming, machine agents, team performance, teamwork
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.