AUTHOR=Hoffman Robert R., Mueller Shane T., Klein Gary, Jalaeian Mohammadreza, Tate Connor
TITLE=Explainable AI: roles and stakeholders, desirements and challenges
JOURNAL=Frontiers in Computer Science
VOLUME=5
YEAR=2023
URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2023.1117848
DOI=10.3389/fcomp.2023.1117848
ISSN=2624-9898
ABSTRACT=The purpose of the Stakeholder Playbook is to enable the developers of explainable AI systems to take into account the different ways in which different stakeholders or role-holders need to 'look inside' AI/XAI systems. We conducted structured cognitive interviews with senior and mid-career professionals who had direct experience either developing or using AI and/or autonomous systems. The results show that role holders need access to others (e.g., trusted engineers, trusted vendors) in order to develop satisfying mental models of AI systems. They need to know How it fails and How it misleads as much as they need to know How it works. Some stakeholders need to develop an understanding that enables them to explain the AI to someone else, not just satisfy their own sensemaking requirements. Only about half of our interviewees said they always wanted explanations, or even needed better explanations than the ones provided. This and other findings seem surprising if not paradoxical, but they can be resolved by acknowledging that different role holders have differing skill sets and different sensemaking desirements. Individuals often serve in multiple roles and therefore can have different immediate goals. Based on our empirical evidence, we created a "Playbook" that lists Explanation Desirements, Explanation Challenges, and Explanation Cautions for a variety of stakeholder groups and roles. The goal is to help XAI developers by providing guidance for the development process and for creating explanations that support the different roles.