AUTHOR=Chen Na, Zhang Xinyue
TITLE=When misunderstanding meets artificial intelligence: the critical role of trust in human–AI and human–human team communication and performance
JOURNAL=Frontiers in Psychology
VOLUME=16
YEAR=2025
URL=https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1637339
DOI=10.3389/fpsyg.2025.1637339
ISSN=1664-1078
ABSTRACT=Introduction: As artificial intelligence (AI) technologies become increasingly integrated into organizational teamwork, managing communication breakdowns in human–AI collaboration has emerged as a significant managerial challenge. Although AI-empowered teams often achieve enhanced efficiency, misunderstandings—especially those caused by AI agents during information exchange—can undermine team trust and impair performance. The mechanisms underlying these effects remain insufficiently explored. Methods: Grounded in evolutionary psychology and trust theory, this study employed a 2 (team type: human–AI vs. human–human) × 2 (misunderstanding type: information omission vs. ambiguous expression) experimental design. A total of 126 valid participants were assigned to collaboratively complete a planning and writing task for a popular science social media column with their respective teammates. Results: The findings indicate that information omissions caused by AI agents significantly reduce team trust, which in turn hinders communication efficiency and overall performance. Conversely, the negative impact of ambiguous expressions is moderated by the level of team trust; teams with higher trust demonstrate greater adaptability and resilience. Moderated mediation analyses further reveal that team type influences the dynamic pathway from misunderstanding to trust and performance. Discussion: This research advances theoretical understanding of misunderstanding management in human–AI teams and provides practical insights for optimizing AI systems and fostering effective human–machine collaboration.