AUTHOR=Hoffman, Robert R.; Mueller, Shane T.; Klein, Gary; Litman, Jordan
TITLE=Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance
JOURNAL=Frontiers in Computer Science
VOLUME=5
YEAR=2023
URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2023.1096257
DOI=10.3389/fcomp.2023.1096257
ISSN=2624-9898
ABSTRACT=If a user is presented with an AI system that purports to explain how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? In other words, how do we know that an explainable AI (XAI) system is any good? This entails some key concepts of measurement. We present specific methods for enabling developers and researchers to: (1) assess the a priori goodness of explanations, (2) assess users' satisfaction with explanations, (3) assess users' understanding of an AI system, (4) assess users' curiosity and need for explanations, (5) assess whether users' trust and reliance on the AI are appropriate, and finally, (6) evaluate how the human-XAI work system performs. The methods we present derive from our integration of extensive research literatures and from our own psychometric evaluations.