PERSPECTIVE article

Front. Psychol., 24 May 2021
Sec. Performance Science
This article is part of the Research Topic Teamwork in Human-Machine Teaming.

Improving Teamwork Competencies in Human-Machine Teams: Perspectives From Team Science

  • 1College of Business, The University of Alabama, Tuscaloosa, AL, United States
  • 2College of Computing and Informatics, Drexel University, Philadelphia, PA, United States
  • 3Soar Technology, Inc., Ann Arbor, MI, United States
  • 4School of Social Sciences, Rice University, Houston, TX, United States

In response to calls for research to improve human-machine teaming (HMT), we present a “perspective” paper that explores techniques from computer science that can enhance machine agents for human-machine teams. As part of this paper, we (1) summarize the state of the science on critical team competencies identified for effective HMT, (2) discuss technological gaps preventing machines from fully realizing these competencies, and (3) identify ways that emerging artificial intelligence (AI) capabilities may address these gaps and enhance performance in HMT. We extend beyond extant literature by incorporating recent technologies and techniques and describing their potential for contributing to the advancement of HMT.

Introduction

Human-machine teaming (HMT)1 is increasingly relevant to a variety of modern industries, domains, and work environments. Nearly a decade ago, Amazon added robots to their warehouse facilities to participate in stocking (The Future of Work, 2019). More recently, Google initiated a research program to improve human-machine collaboration (Knight, 2017). These examples and others (e.g., IBM, Facebook; see Davenport, 2018) have demonstrated that machine agents, or machines capable of perceiving and acting upon the world autonomously (Russell and Norvig, 2009), can improve human and organizational performance by providing opportunities for increased safety and productivity.

Effective HMT is contingent upon the success of complex interactions between human and machine agents, and between these agents and their environment (Stowers et al., 2017). However, it is difficult to create machine agents that have the advanced competencies (i.e., knowledge, skills, and abilities) necessary to support these complex interactions (Sukthankar et al., 2012). Consequently, not all HMT results in heightened performance at the individual, team, or organizational level. In their respective literatures, team researchers (e.g., Salas et al., 2009) and computer scientists (e.g., Klein et al., 2004; Ososky et al., 2012; Seeber et al., 2020) have identified competencies that are important for successful teaming, but efforts to identify promising new technologies in this area have been limited. Thus, there is a need to explore recent technological developments that may contribute to the advancement of HMT research and practice regarding effective human-machine collaboration.

In this perspective, we (1) briefly summarize the state of the science on critical team competencies identified for effective HMT, (2) highlight gaps preventing machines from fully realizing these competencies, and (3) identify emerging artificial intelligence (AI) capabilities that show promise for enhancing these competencies in machine agent teammates. Our goal is to show how HMT can integrate cutting edge advancements from computer science to improve capabilities of machines to function as teammates.

The Evolution of Human-Machine Teams

Psychologists and engineers have long explored the use of machines to augment and improve human task performance (Fitts, 1951; Dekker and Woods, 2002). In early work, machines operated as tools to facilitate taskwork by automating physical (e.g., product assembly) and cognitive (e.g., text generation) tasks. The goal of the machine was to improve the overall HMT performance (Dekker and Woods, 2002). Historically, the sociotechnical systems approach (Trist and Bamforth, 1951; Cherns, 1976) guided work design for HMT (Trist, 1981). According to this perspective, the human represents the social subsystem, and completes tasks using resources within the technical subsystem, which is represented by the machine (Eason, 2009).

As machines gained intelligence and the ability to adapt in their interactions with humans, researchers (e.g., Parasuraman and Riley, 1997) developed guidelines regarding the appropriate design and use of machines, including guidelines for their autonomy and adaptivity (e.g., Parasuraman et al., 2000). Broader frameworks describe the human, machine, and contextual inputs, and the resulting processes and states that define human-machine performance (Pina et al., 2008; Stowers et al., 2017). These frameworks also highlight the temporal nature of interaction between humans and machines.

In the last decade, the conversation has shifted from machines as tools to machines as teammates (Phillips et al., 2011; Seeber et al., 2020). The introduction of machines as a component of the social – rather than solely the technical – system has resulted in new design-related challenges (Sukthankar et al., 2012). For example, once machines attain a certain level of intelligence, humans tend to judge machines in much the same way they do their fellow humans, seeking human likeness where it may not exist (Nass et al., 1995; Groom and Nass, 2007). From the HMT perspective, this has been referred to as “teammate-likeness,” where the human perceives the machine as possessing agency, altruism, task-interdependence, relationship building, sophisticated communication, and shared mental models (SMM; Wynne and Lyons, 2018). Teammate-likeness is contingent on factors such as trust and ability (Schaefer et al., 2016) and may also be contingent on machine cues that imply emotional intelligence (e.g., empathy, perspective taking; Salovey and Mayer, 1990). Given that machines still lack the capacity for true emotional intelligence and other socio-emotional competencies that are on par with humans (e.g., Picard et al., 2001; Erol et al., 2020), creating technologies and techniques that allow machines to live up to this perceived teammate-likeness presents a unique challenge.

Gaps in Machine Competencies for HMT

It may not be necessary for machines to possess all human socio-emotional competencies to be effective teammates. However, the creation of certain capabilities allows machines to develop attitude-based competencies that have been identified as critical for the optimization of teams (cf. Salas et al., 2009), such as cohesion and mutual (rather than one-way) trust (Groom and Nass, 2007). Recent work in team science has emphasized three team competencies that are transportable across contexts (Salas et al., 2018), namely communication, coordination, and adaptability (hereafter referred to as adaptation). These competencies, referred to as transportable teamwork competencies, are applicable in any effective team, regardless of the team or task environment (Salas et al., 2018).

Although team researchers discuss communication, coordination, and adaptation strictly in the realm of human teams (Salas et al., 2018), others have highlighted their importance to HMT (Stowers et al., 2017; Seeber et al., 2020). These competencies are considered universally relevant collaborative processes (Salas et al., 2018; Seeber et al., 2020). Due to their wide applicability across team and task types, we feature them in this perspective piece as critical areas where additional technological advancements could improve HMT performance on a large scale. Here, we describe the state of the science on these competencies in HMT and identify gaps where new technologies can provide benefit.

Communication

Communication, which refers to the process of exchanging information between teammates (Salas et al., 2009), is important for team performance as it contributes to the development and maintenance of SMMs and the successful execution of many necessary team processes, including planning and mission analysis (Salas et al., 2009). In HMT research, the process of information exchange between humans and machines has been examined via the concept of transparency, defined as “the quality of an interface (e.g., visual, linguistic) pertaining to its abilities to afford an operator’s comprehension about an intelligent agent’s intent, performance, future plans, and reasoning process” (Chen et al., 2014, p. 2).

Although perfect transparency has yet to be realized in HMT (Nam and Lyons, 2020), machines have gained the capacity to share information with humans and coordinate more effectively in joint tasks (Lyons, 2013; Chen et al., 2014, 2018). This includes using turn-taking (Chao and Thomaz, 2016), which is integral to human-machine fluency (Hoffman, 2019). Features such as turn-taking and the ability to recognize human language (Tellex et al., 2020) can enhance the bidirectionality of communication between humans and machines, making HMT in general more teamlike and efficient (Chen et al., 2018).

Researchers studying effective HMT have identified that the trust a human member has in a machine is a critical factor in successful team communication (Nam and Lyons, 2020) as well as HMT fluency (Hoffman, 2019). To that end, researchers have examined how communication can promote trust and have established guidelines for effectively designing information to promote trust and overall performance (Chen et al., 2014; Sanneman and Shah, 2020). In addition to guidelines regarding the quantity and design of information, researchers have investigated the quality of information shared by HMT members and have applied frameworks of human situation awareness to understand and improve communication processes, such as the mitigation of confusion (Chen et al., 2014; Stowers et al., 2020).

Despite improvements in communication-related abilities, not all machines apply these skills with equal success, leaving gaps between high-performing machines and high-performing HMTs. For example, machines utilizing neural networks tend to outperform other machines, but they generally communicate more poorly because they rely on distributed statistical representations (Nam and Lyons, 2020). Teaming with machines that utilize neural networks would therefore benefit from improved explainability (Gunning and Aha, 2019). Technological advancements in the area of explainable AI (XAI) show promise for enhancing transparency between humans and machines that utilize neural networks. In short, XAI is AI whose behavior can be understood by humans (Gunning and Aha, 2019; Sanneman and Shah, 2020). A goal of XAI research (Gunning and Aha, 2019) is to identify approaches for communicating AI models and their inferences in a format that human operators can comprehend. This research yields XAI-afforded competencies (e.g., transparency) that contribute to understanding and trust between human(s) and machine(s) (e.g., Nam and Lyons, 2020), thus impacting HMT communication.
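
To make the XAI idea concrete, the following minimal sketch illustrates one simple attribution-style explanation: the machine reports how much each input feature moved its prediction relative to a baseline observation. The toy “threat score” model, the feature names, and the baseline values are hypothetical illustrations for this perspective, not part of any cited XAI program.

```python
# Minimal sketch of an occlusion-style explanation: attribute a black-box
# prediction to input features by swapping each feature for a baseline value.
# The "threat_score" model and feature names are hypothetical illustrations.

def threat_score(features):
    """Stand-in for an opaque model, e.g., a trained neural network."""
    speed, proximity, heading = features
    return 0.6 * speed + 0.3 * proximity + 0.1 * heading

def occlusion_attributions(model, features, baseline):
    """Measure how the prediction changes when each feature is occluded."""
    full = model(features)
    attributions = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = baseline[i]
        attributions.append(full - model(occluded))
    return full, attributions

obs = [0.9, 0.7, 0.2]      # current observation
base = [0.0, 0.0, 0.0]     # neutral reference observation
score, attrs = occlusion_attributions(threat_score, obs, base)
for name, a in zip(["speed", "proximity", "heading"], attrs):
    # A human-readable summary the machine teammate could share
    print(f"{name} contributed {a:+.2f} to the threat score of {score:.2f}")
```

An explanation of this form gives the human teammate some visibility into the machine’s reasoning without requiring them to inspect the underlying model.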

Communication in HMT must be a two-way street. Machines must be able to effectively share information in a way that humans can understand, but doing so requires that machines accurately model human comprehension of information. This ties in closely with the second transportable teamwork competency: coordination.

Coordination

While communication refers to the information-sharing process, coordination in human teams refers to the organization of team members’ knowledge, skills, and behaviors to meet a specific goal (Salas et al., 2009). In HMT literature, coordination is defined as the process through which humans and machines manage “dependencies between activities” (Malone and Crowston, 1990). In effective team coordination, task-relevant information is communicated in a timely manner, while unnecessary communication is avoided. In this way, effective communication processes can be seen as necessary but not sufficient for effective HMT coordination.

Human-machine teaming scholars have identified three requirements for effective coordination (Klein et al., 2005): members must (1) each be reliable and able to predict each other’s behaviors, (2) possess common historical and present knowledge (Clark, 1996), and (3) be able to re-direct or help each other in tasks (Christoffersen and Woods, 2002). To this end, a machine is considered an effective coordinator if it is reliable, directable, able to communicate intentions, and able to recognize the status and intentions of other team members (Klein et al., 2005). These qualities allow the machine to engage in the communication and creation of the SMMs needed for successful coordination (Matthews et al., 2021).

The primary gap in the development of coordination in HMT lies in the degree to which machines can engage in implicit coordination. Implicit coordination, which refers to the process of synchronizing team member actions based on assumptions of what each teammate is most likely to do (Wittenbaum and Stasser, 1996), is helpful in high workload situations as it reduces “communication overhead” (MacMillan et al., 2004) and allows teammates to focus on the task at hand with minimal distraction. While machines currently possess the ability to detect certain implicit cues (e.g., via the recognition of facial expressions; Picard et al., 2001), they are limited in their ability to detect contextual cues. For example, because coordination involves a complex and varying presentation of implicit communication cues (Lackey et al., 2011), it is difficult and expensive to support machine cue perception on a human level. Detection, interpretation, and reasoning about these cues from a human perspective (Baker et al., 2011) are imperative to ensure effective coordination.

To this end, machines would benefit from developing a theory of mind (Baker et al., 2011). This would translate observations of teammates’ behaviors into a computational model of what they know/do not know, what their goals and preferences are, what capabilities they have, and what behaviors they might take next. Such a model could then be employed to simulate what a teammate might do in different situations or what options they would prefer their machine teammate to take. This kind of capability would support implicit coordination with teammates by enabling the machine to anticipate its teammates’ behaviors and expectations and then to adapt its own behavior to align with those. We elaborate more on adaptation next, including the state of the science and related opportunities for improvement.
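
Before turning to adaptation, a minimal sketch of the kind of computational theory-of-mind reasoning described above may be useful: the machine observes a teammate’s movements and updates a posterior over which goal the teammate is pursuing, in the spirit of Bayesian inverse planning (Baker et al., 2011). The grid positions, goal labels, and “noisy rational” parameter are hypothetical illustrations, not a reproduction of that model.

```python
# Minimal sketch of Bayesian goal inference: infer a teammate's goal from
# observed actions under a "noisy rational" action model. All goals,
# positions, and parameters are hypothetical illustrations.
import math

GOALS = {"resupply": (0, 4), "rescue": (4, 0)}
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def action_likelihood(pos, action, goal, beta=3.0):
    """Actions that reduce distance to the goal are exponentially more likely."""
    def dist_after(a):
        dx, dy = MOVES[a]
        return math.dist((pos[0] + dx, pos[1] + dy), goal)
    scores = {a: math.exp(-beta * dist_after(a)) for a in MOVES}
    return scores[action] / sum(scores.values())

def update_beliefs(beliefs, pos, action):
    """Bayes rule: update the posterior over goals given one observed action."""
    posterior = {g: beliefs[g] * action_likelihood(pos, action, xy)
                 for g, xy in GOALS.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

beliefs = {g: 1.0 / len(GOALS) for g in GOALS}     # uniform prior over goals
observed = [((2, 2), "up"), ((2, 3), "up")]        # teammate's observed moves
for pos, act in observed:
    beliefs = update_beliefs(beliefs, pos, act)
print(beliefs)  # probability mass shifts toward the "resupply" goal at (0, 4)
```

A posterior of this kind could then drive implicit coordination, for example by pre-positioning resources the teammate is likely to need.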

Adaptation

Adaptation in HMT has been examined in two ways: (1) adaptability (i.e., human-controlled adaptation; Miller et al., 2005) and (2) adaptiveness (i.e., machine-controlled adaptation). For example, adaptability can be achieved by supporting human choice regarding the machine’s role, behavioral parameters, and level of autonomy. The human might decide a machine teammate’s tasking order, such as choosing the next lesson given by an intelligent tutoring system (Chou et al., 2015). However, research has shown that humans do not always effectively allocate tasking to automated systems (Lin et al., 2019). To overcome this human limitation, machines can exhibit adaptiveness by prompting a human to take control of a task (Kaber et al., 2005), or by assuming control in cases of suboptimal task management by human teammates. However, the latter solution poses a threat to human agency (Wohleber et al., 2020) and must be carefully executed.
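
The distinction between adaptability and adaptiveness can be sketched in a few lines of code. The example below contrasts a human-set autonomy level with a machine that detects apparent overload and proposes, rather than imposes, a change in task allocation, preserving human agency. The workload threshold, task model, and prompt wording are hypothetical illustrations, not a validated design.

```python
# Minimal sketch contrasting adaptable (human-controlled) and adaptive
# (machine-initiated) changes in task allocation. All thresholds and
# fields are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class TeamState:
    human_task_queue: int   # number of tasks currently assigned to the human
    autonomy_level: int     # 0 = manual, 1 = machine suggests, 2 = machine acts

def adaptable_update(state: TeamState, human_choice: int) -> TeamState:
    """Adaptability: the human directly sets the machine's autonomy level."""
    state.autonomy_level = human_choice
    return state

def adaptive_prompt(state: TeamState, overload_threshold: int = 5) -> str:
    """Adaptiveness: the machine detects overload and proposes a change,
    leaving the final decision with the human to preserve agency."""
    if state.human_task_queue > overload_threshold and state.autonomy_level < 2:
        return "Workload appears high. Shall I take over routine tasks? (y/n)"
    return "No change proposed."

state = TeamState(human_task_queue=7, autonomy_level=1)
print(adaptive_prompt(state))  # machine prompts rather than silently taking control
```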

In team science, adaptation or adaptability as a teamwork competency is examined more broadly and refers to the adjustment of strategies and behaviors in response to changes in the team’s circumstances (Driskell et al., 2018). In considering adaptation through this lens, machines are capable of detecting changes in the internal team and external environments (Lackey et al., 2011), allowing them to engage both adaptive and adaptable mechanisms as designed. They may also detect some underlying causes of changing environments through common sense reasoning (Morgenstern et al., 2016), though this capability remains limited by the datasets used to train common sense (Hao, 2020).

The limitation of machines having to rely on datasets to train common sense has led researchers to explore “third wave AI” capabilities for informing machine knowledge and adaptation. The Defense Advanced Research Projects Agency (DARPA) has argued that to move beyond knowledge-based AI methods (first wave) and statistical machine learning AI methods (second wave), we need approaches that can integrate both first and second wave methods to support contextual understanding and adaptation (third wave, Launchbury, 2017). Most machine learning systems operate by identifying correlational relationships between variables. In contrast, as part of third wave AI, causal and counterfactual models (Pearl, 2019) aim to understand the causal relationships between variables. By modeling causal relationships, machines can better support counterfactual inference; i.e., they can generalize from observed operating conditions to unobserved conditions.
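
To illustrate what counterfactual inference adds beyond correlation, the sketch below follows the abduction, action, and prediction pattern associated with structural causal models (Pearl, 2019) on a toy detection task. The structural equation, variable names, and numbers are hypothetical illustrations.

```python
# Minimal sketch of a counterfactual query on a toy structural causal model.
# Structural equation (hypothetical): detection := 1 if gain + visibility > 1.0

def detection(sensor_gain, visibility):
    return 1 if sensor_gain + visibility > 1.0 else 0

# Factual situation: low sensor gain, the target was missed.
observed_gain, observed_visibility = 0.3, 0.4
factual = detection(observed_gain, observed_visibility)      # -> 0 (missed)

# Counterfactual: "Would the target have been detected had the machine raised
# its sensor gain, with everything else unchanged?"
# Abduction: retain the inferred exogenous condition (visibility = 0.4).
# Action: intervene on sensor_gain, setting it to 0.8.
# Prediction: re-evaluate the structural equation under the intervention.
counterfactual = detection(0.8, observed_visibility)          # -> 1 (detected)

print(f"factual: {factual}, counterfactual: {counterfactual}")
```

Reasoning of this kind lets a machine generalize from the operating conditions it has observed to conditions it has not, which is precisely what adaptive teaming requires.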

In the previous section, we suggested that developing theory of mind in machines (Baker et al., 2011) would benefit HMT coordination and adaptation. For the adaptation piece to be fully realized, machines must not only recognize their teammates’ knowledge and behaviors but also anticipate and respond to new knowledge and behaviors should they arise. Machines that create and apply causal links to new scenarios should be able to do this and should therefore be more effective at modifying their behaviors as members of a truly adaptive team.

Conclusion and a Path Forward

Communication, coordination, and adaptation have been identified as critical to the success of both human teaming and HMT, but machines have yet to fully realize the uniquely human cognitive abilities that are necessary for effective teaming (Matthews et al., 2021). However, there are new AI capabilities that could allow machines to maximize these competencies. These capabilities offer a means for machines to begin meeting the requirements needed for effective collaboration with humans.

The capabilities afforded by recent technological advancements show promise for allowing machines to possess the transportable teamwork competencies identified as universally critical to teams (Salas et al., 2018; Seeber et al., 2020). For example, machines might leverage a theory of mind reasoning capability to build a computational model of their teammate based on observations of their behavior. This model might be used to infer what teammates know, what information they have available to make decisions, and what they are likely to do next – enabling better implicit coordination with humans. This theory of mind model, in conjunction with the ability to generate human-explainable outputs (via XAI), will also enable machines to determine when and how best to communicate with teammates, further enhancing the trust and ability that affords machine teammate-likeness. Finally, the creation of causal and counterfactual inference capabilities unique to third wave AI will allow machines to be truly adaptive teammates that possess the ability to recognize and reason about the underlying factors that produce changes in the HMT and environment.

While these new technological approaches show promise, more work is needed to refine them to the level required for effective teamwork. Current AI research focuses on developing specific learning and performance capabilities and often does not incorporate findings or insights from the teaming and HMT literature. For example, consider OpenAI Five, a team of five trained neural-network models that can coordinate to beat a team of five top human champions at Dota 2, a multiplayer online battle arena game (Berner et al., 2019). While the five machines were able to coordinate effectively with each other, a subsequent match with an HMT showed that the machines performed worse when partnered with humans. In examining this phenomenon, Carroll et al. (2019) found that the machines were limited by the knowledge they gained from initial training with fellow machines.

By contrast, if the OpenAI Five agents possessed the capabilities outlined in this perspective piece (XAI, theory of mind, and third wave counterfactual prediction), they should better support communication, coordination, and adaptation with humans. As this example shows, more work is needed to better understand how insights from human teaming and HMT might be integrated into the development of these emerging machines. Increasing collaboration between computer scientists and HMT researchers in examining these insights would be beneficial. With machines operating at new levels of sophistication, true HMT may become possible at a larger scale than seen before.

Author Contributions

KS, LB, CM, RW, and ES contributed to the writing and content of this paper. All authors contributed to the article and approved the submitted version.

Funding

This work was supported under the DARPA TAILOR program (award no. HR00111990055).

Conflict of Interest

RW was employed by company Soar Technology, Inc.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the United States Government.

Footnotes

1. We refer to human-machine teams (HMT) as humans and machines working together to accomplish a goal, with machines being autonomous enough to engage in decision making (Seeber et al., 2020). Limited forms of HMT have already begun, but additional sophistication still needs to be achieved before machines can be considered true teammates (Sukthankar et al., 2012).

References

Baker, C., Saxe, R., and Tenenbaum, J. (2011). “Bayesian theory of mind: Modeling joint belief-desire attribution,” in Proceedings of the Annual Meeting of the Cognitive Science Society; July 20–23, 2011; No. 33.

Berner, C., Brockman, G., Chan, B., Cheung, V., Dębiak, P., Dennison, C., et al. (2019). Dota 2 with large scale deep reinforcement learning. arXiv [Preprint].

Carroll, M., Shah, R., Ho, M. K., Griffiths, T., Seshia, S., Abbeel, P., et al. (2019). “On the utility of learning about humans for human-AI coordination,” in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019. eds. H. M. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. A. Fox and R. Garnett. December 8–14, 2019; Vancouver, BC, Canada, 5175–5186.

Chao, C., and Thomaz, A. (2016). Timed petri nets for fluent turn-taking over multimodal interaction resources in human-robot collaboration. Int. J. Robot. Res. 35, 1330–1353. doi: 10.1177/0278364915627291

Chen, J. Y., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., and Barnes, M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theor. Issues Ergon. Sci. 19, 259–282. doi: 10.1080/1463922X.2017.1315750

Chen, J. Y., Procci, K., Boyce, M., Wright, J., Garcia, A., and Barnes, M. (2014). Situation Awareness–Based Agent Transparency. Report No. ARL-TR-6905. Aberdeen Proving Ground, MD: U.S. Army Research Laboratory.

Cherns, A. (1976). The principles of sociotechnical design. Hum. Relat. 29, 783–792. doi: 10.1177/001872677602900806

Chou, C. Y., Lai, K. R., Chao, P. Y., Lan, C. H., and Chen, T. H. (2015). Negotiation based adaptive learning sequences: combining adaptivity and adaptability. Comput. Educ. 88, 215–226. doi: 10.1016/j.compedu.2015.05.007

Christoffersen, K., and Woods, D. D. (2002). “1. How to make automated systems team players,” in Advances in Human Performance and Cognitive Engineering Research. Vol. 2. ed. E. Salas (Bingley: Emerald Group Publishing Limited), 1–12.

Clark, H. (1996). Using Language. Cambridge: Cambridge University Press.

Davenport, T. H. (2018). The AI advantage: How to Put the Artificial Intelligence Revolution to Work. Cambridge, MA: MIT Press.

Dekker, S. W. A., and Woods, D. D. (2002). MABA-MABA or abracadabra? Progress on human-automation co-ordination. Cogn. Tech. Work 4, 240–244. doi: 10.1007/s101110200022

Driskell, J. E., Salas, E., and Driskell, T. (2018). Foundations of teamwork and collaboration. Am. Psychol. 73, 334–348. doi: 10.1037/amp0000241

Eason, K. (2009). Before the internet: the relevance of socio-technical systems theory to emerging forms of virtual organisation. Int. J. Sociotechnol. Knowled. Dev. 1, 23–32. doi: 10.4018/jskd.2009040103

Erol, B. A., Majumdar, A., Benavidez, P., Rad, P., Choo, K.-K. R., and Jamshidi, M. (2020). Toward artificial emotional intelligence for cooperative social human–machine interaction. IEEE Transact. Comput. Soc. Syst. 7, 234–246. doi: 10.1109/TCSS.2019.2922593

Fitts, P. M. (1951). Human Engineering for an Effective Air-Navigation and Traffic-Control System. Washington, DC: National Research Council, Division of Anthropology and Psychology, Committee on Aviation Psychology.

Groom, V., and Nass, C. (2007). Can robots be teammates? Benchmarks in human-robot teams. Interact. Stud. 8, 493–500. doi: 10.1075/is.8.3.10gro

Gunning, D., and Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40, 44–58. doi: 10.1609/aimag.v40i2.2850

Hao, K. (2020). AI Still Doesn’t Have the Common Sense to Understand Human Language. MIT Technology Review. Available at: https://www.technologyreview.com/2020/01/31/304844/ai-common-sense-reads-human-language-ai2/ (Accessed December 08, 2020).

Hoffman, G. (2019). Evaluating fluency in human–robot collaboration. IEEE Transact. Hum. Mach. Syst. 49, 209–218. doi: 10.1109/THMS.2019.2904558

Kaber, D. B., Wright, M. C., Prinzel, L. J., and Clamann, M. P. (2005). Adaptive automation of human-machine system information processing functions. Hum. Factors 47, 730–741. doi: 10.1518/001872005775570989

Klein, G., Feltovich, P. J., Bradshaw, J. M., and Woods, D. D. (2005). “Common ground and coordination in joint activity,” in Organizational Simulation. Vol. 53. eds. W. R. Rouse and K. B. Boff (New York, NY: Wiley), 139–184.

Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., and Feltovich, P. J. (2004). Ten challenges for making automation a “team player” in joint human-agent activity. IEEE Intell. Syst. 19, 91–95. doi: 10.1109/MIS.2004.74

Knight, W. (2017). Your Best Teammate Might Someday be an Algorithm. MIT Technology Review. Available at: https://www.technologyreview.com/2017/07/10/4384/your-best-teammate-might-someday-be-an-algorithm/ (Accessed December 08, 2020).

Lackey, S., Barber, D., Reinerman, L., Badler, N. I., and Hudson, I. (2011). Defining next-generation multi-modal communication in human robot interaction. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 55, 461–464. doi: 10.1177/1071181311551095

Launchbury, J. (2017). A DARPA Perspective on Artificial Intelligence [PowerPoint slides]. Defense Advanced Research Projects Agency. Available at: https://www.darpa.mil/about-us/darpa-perspective-on-ai (Accessed April 30, 2021).

Lin, J., Matthews, G., Wohleber, R. W., Funke, G. J., Calhoun, G. L., Ruff, H. A., et al. (2019). Overload and automation-dependence in a multi-UAS simulation: task demand and individual difference factors. J. Exp. Psychol. Appl. 26, 218–235. doi: 10.1037/xap0000248

Lyons, J. B. (2013). Being transparent about transparency: a model for human-robot interactions. AAAI Spring Symposium: Trust and Autonomous Systems.

MacMillan, J., Entin, E., and Serfaty, D. (2004). “Communication overhead: the hidden cost of team cognition,” in Team Cognition: Understanding the Factors that Drive Process and Performance. eds. E. Salas and S. M. Fiore (Washington, DC: American Psychological Association), 61–82.

Malone, T. W., and Crowston, K. (1990). “What is coordination theory and how can it help design cooperative work systems?” in Presented at the Conference on Computer-Supported Cooperative Work, 7 October, 1990, Los Angeles, CA, 357–370.

Matthews, G., Panganiban, A. R., Lin, J., Long, M., and Schwing, M. (2021). “Super-machines or sub-humans: mental models and trust in intelligent autonomous systems,” in Trust in Human-Robot Interaction. eds. C. S. Nam and J. B. Lyons (Academic Press), 59–82.

Miller, C. A., Funk, H., Goldman, R., Meisner, J., and Wu, P. (2005). “Implications of adaptive vs. adaptable UIs on decision making: Why “automated adaptiveness” is not always the right answer,” in Proceedings of the 1st International Conference on Augmented Cognition; July 22–27, 2005.

Morgenstern, L., Davis, E., and Ortiz, C. L. (2016). Planning, executing, and evaluating the winograd schema challenge. AI Mag. 37, 50–54. doi: 10.1609/aimag.v37i1.2639

Nam, C. S., and Lyons, J. B. (eds.) (2020). Trust in Human-Robot Interaction (Academic Press).

Nass, C., Moon, Y., Fogg, B. J., Reeves, B., and Dryer, C. (1995). “Can computer personalities be human personalities?” in Conference Companion on Human Factors in Computing Systems; May 1995; 228–229.

Ososky, S., Schuster, D., Jentsch, F., Fiore, S., Shumaker, R., Lebiere, C., et al. (2012). “The importance of shared mental models and shared situation awareness for transforming robots from tools to teammates,” in Proceedings of SPIE 8387, Unmanned Systems Technology XIV; April 25–27, 2012; 838710–838711.

Parasuraman, R., and Riley, V. (1997). Humans and automation: use, misuse, disuse, abuse. Hum. Factors 39, 230–253. doi: 10.1518/001872097778543886

Parasuraman, R., Sheridan, T. B., and Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transact. Syst. Man Cybern. A Syst. Hum. 30, 286–297. doi: 10.1109/3468.844354

Pearl, J. (2019). The seven tools of causal inference, with reflections on machine learning. Commun. ACM 62, 54–60. doi: 10.1145/3241036

Phillips, E., Ososky, S., Grove, J., and Jentsch, F. (2011). From tools to teammates: toward the development of appropriate mental models for intelligent robots. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 55, 1491–1495. doi: 10.1177/1071181311551310

Picard, R. W., Vyzas, E., and Healey, J. (2001). Toward machine emotional intelligence: analysis of affective physiological state. IEEE Transact. Patt. Anal. Mach. Intell. 23, 1175–1191. doi: 10.1109/34.954607

Pina, P. E., Cummings, M. L., Crandall, J. W., and Penna, M. D. (2008). “Identifying generalizable metric classes to evaluate human-robot teams,” in Proceedings of the 3rd Annual Conference on Human–Robot Interaction, New York, NY: ACM; March 12–15, 2008; 13–20.

Russell, S., and Norvig, P. (2009). Artificial Intelligence: A Modern Approach. 3rd Edn. Upper Saddle River, NJ: Prentice Hall.

Salas, E., Reyes, D. L., and McDaniel, S. H. (2018). The science of teamwork: progress, reflections, and the road ahead. Am. Psychol. 73, 593–600. doi: 10.1037/amp0000334

Salas, E., Rosen, M. A., Burke, C. S., and Goodwin, G. F. (2009). “The wisdom of collectives in organizations: An update of the teamwork competencies,” in Team Effectiveness in Complex Organizations. Cross-Disciplinary Perspectives and Approaches. eds. E. Salas, G. Goodwin, and C. S. Burke (New York, NY: Psychology Press), 39–79.

Salovey, P., and Mayer, J. D. (1990). Emotional intelligence. Imagin. Cogn. Pers. 9, 185–211. doi: 10.2190/DUGG-P24E-52WK-6CDG

Sanneman, L., and Shah, J. A. (2020). “A Situation Awareness-Based Framework for Design and Evaluation of Explainable AI,” in International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems; May 9–13, 2020; Cham: Springer, 94–110.

Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., and Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems. Hum. Factors 58, 377–400. doi: 10.1177/0018720816634228

Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G. J., Elkins, A., et al. (2020). Machines as teammates: a research agenda on AI in team collaboration. Inf. Manag. 57:103174. doi: 10.1016/j.im.2019.103174

Stowers, K., Kasdaglis, N., Rupp, M. A., Newton, O. B., Chen, J. Y., and Barnes, M. J. (2020). The IMPACT of agent transparency on human performance. IEEE Transact. Hum. Mach. Syst. 50, 245–253. doi: 10.1109/THMS.2020.2978041

Stowers, K., Oglesby, J., Sonesh, S., Leyva, K., Iwig, C., and Salas, E. (2017). A framework to guide the assessment of human–machine systems. Hum. Factors 59, 172–188. doi: 10.1177/0018720817695077

Sukthankar, G., Shumaker, R., and Lewis, M. (2012). “Intelligent agents as teammates,” in Theories of Team Cognition: Cross-Disciplinary Perspectives. eds. E. Salas, S. M. Fiore, and M. P. Letsky (New York, NY: Routledge), 313–343.

Tellex, S., Gopalan, N., Kress-Gazit, H., and Matuszek, C. (2020). Robots that use language. Annu. Rev. Control Robot. Auton. Syst. 3, 25–55. doi: 10.1146/annurev-control-101119-071628

The Future of Work (2019). A VICE News Special Report. Reported by Krishna Andavolu (Vice News Correspondent), Vice Media, HBO. Available at: https://www.youtube.com/watch?v=_iaKHeCKcq4

Trist, E. (1981). “The evolution of socio-technical systems,” in Perspectives on Organizational Design and Behaviour. eds. A. Van de Ven and W. Joyce (Wiley Interscience).

Trist, E. L., and Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting: an examination of the psychological situation and defences of a work group in relation to the social structure and technological content of the work system. Hum. Relat. 4, 3–38. doi: 10.1177/001872675100400101

Wittenbaum, G. M., and Stasser, G. (1996). “Management of information in small groups,” in What’s Social About Social Cognition? Research on Socially Shared Cognition in Small Groups. eds. J. L. Nye and A. M. Brower (Thousand Oaks, CA: Sage), 3–28.

Wohleber, R. W., Stowers, K., Chen, J. Y. C., and Barnes, M. (2020). “Conducting polyphonic human-robot communication: mastering crescendos and diminuendos in transparency,” in Advances in Simulation and Digital Human Modeling. Advances in Intelligent Systems and Computing. Vol. 1206. eds. D. Cassenti, S. Scataglini, S. Rajulu, and J. Wright (Cham: Springer), 10–17.

Wynne, K. T., and Lyons, J. B. (2018). An integrative model of autonomous agent teammate-likeness. Theor. Issues Ergon. Sci. 19, 353–374. doi: 10.1080/1463922X.2016.1260181

Keywords: human-machine team, artificial intelligence, third wave AI, explainable AI, teamwork

Citation: Stowers K, Brady LL, MacLellan C, Wohleber R and Salas E (2021) Improving Teamwork Competencies in Human-Machine Teams: Perspectives From Team Science. Front. Psychol. 12:590290. doi: 10.3389/fpsyg.2021.590290

Received: 31 July 2020; Accepted: 15 April 2021;
Published: 24 May 2021.

Edited by:

Gerald Matthews, University of Central Florida, United States

Reviewed by:

Corey Fallon, Pacific Northwest National Laboratory (DOE), United States
April Rose Panganiban, Air Force Research Laboratory, United States

Copyright © 2021 Stowers, Brady, MacLellan, Wohleber and Salas. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kimberly Stowers, kim.stwrs@gmail.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.