ORIGINAL RESEARCH article
Front. Organ. Psychol.
Sec. Performance and Development
Volume 3 - 2025 | doi: 10.3389/forgp.2025.1419403
This article is part of the Research Topic: Affective and Behavioral Dynamics in Human-Technology Interactions of Industry 5.0.
Trust and AI weight: human-AI collaboration in organizational management decision-making
Provisionally accepted
Jinan University, Guangzhou, China
The emergence of Artificial Intelligence (AI) has revolutionized decision-making in human resource management. Since humans and AI each possess distinct strengths in decision-making, the synergy between human and AI agents has the potential to significantly enhance both the efficiency and the quality of managerial decision-making processes. Although assigning decision weights to AI agents opens innovative avenues for human-AI collaboration, the mechanisms driving the allocation of decision weights to AI agents remain inadequately understood. To elucidate these mechanisms, this paper examines the influence of trust in AI on AI weight allocation within the framework of human-AI cooperation, drawing on the Socio-Cognitive Model of Trust (SCMT). We conducted a series of survey studies involving scenario-based decision-making tasks. Study 1 examined the relationship between trust in AI and AI weight among 111 managers performing employee recruitment tasks. The results indicated that trust in AI increases the decisional weight attributed to AI agents, and that willingness to collaborate with AI mediates the relationship between trust in AI and AI weight in personnel selection. Study 2 surveyed 210 managers using employee performance evaluation tasks. The findings revealed that the perceived free will of AI agents negatively moderates the relationship between trust in AI and willingness to collaborate with AI, such that the relationship is weaker when individuals perceive a higher degree of free will in AI agents than when they perceive a lower degree. Theoretically, this paper advances the understanding of the function of trust in human-AI interaction by tracing the development of trust from attitude to action in human-AI cooperative decision-making. Practically, it offers valuable insights into the design of AI agents and organizational management within the context of human-AI collaboration.
Keywords: human-AI cooperation, AI weight, trust, Socio-Cognitive Model of Trust, decision-making
Received: 18 Apr 2024; Accepted: 20 May 2025.
Copyright: © 2025 Wang and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Xiaoxi Chen, Jinan University, Guangzhou, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.