AUTHOR=Gillespie Tony TITLE=Building trust and responsibility into autonomous human-machine teams JOURNAL=Frontiers in Physics VOLUME=10 YEAR=2022 URL=https://www.frontiersin.org/journals/physics/articles/10.3389/fphy.2022.942245 DOI=10.3389/fphy.2022.942245 ISSN=2296-424X ABSTRACT=Dynamic human-machine teams can reallocate tasks between the human user responsible for the team's outputs, other humans in the team, and automated assets. This is essential when any team member becomes overloaded, with a consequent risk of mishap. Potentially, artificial intelligence (AI), including machine learning, will aid the identification of overloads and be able to redistribute assets between tasks in a manner which maximises overall efficiency. When an AI system can make decisions and act on them without human intervention, it implies that the user is comfortable with taking responsibility for all its actions and can justify those actions after the event. This is a very high threshold, with stringent requirements for many factors such as: reliability; robustness; resistance to attack; transparency; predictability; data security; and protection against incorrect use. The threshold can be lowered if the system makes predictions about the consequences of its actions: if there is doubt about the effect of an action, an option to refer to the human user before acting both increases safety and increases mutual trust. A systems engineering approach, using the 4D/RCS architecture, is taken with the aim of demonstrating that a trusted system is feasible and showing how the human user can develop trust through the use of internal features. The architecture has a predictive component and is hierarchical, so limits on responsibilities can be set for every level, forcing nodes at that level to refer to a higher one when necessary. The higher system levels are developed, covering the user, their Human Machine Interface (HMI), dynamic automated task allocation, and task management of assets.
The decision-making process in a node is developed, showing how options for action are generated and chosen, with defined limits on the actions which can be taken at that level by every node, human or automated. AI can be introduced at each node, so even when non-deterministic decision-making occurs, the resultant system behaviour will still be subject to the same limits as before. The designer could change the design if this gives greater advantage, but by using this architectural structure, wider limits can be introduced and their implications assessed.
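The hierarchical limit-and-referral scheme the abstract describes can be illustrated with a minimal sketch. All names, the severity scale, and the confidence threshold below are illustrative assumptions, not details from the paper: each node holds a limit on the actions it may take and a threshold on its consequence predictions, and anything outside those bounds is referred to the level above, terminating at the human user.

```python
# Illustrative sketch (assumed, not from the paper): a hierarchy of decision
# nodes, each with limits on the actions it may take; an action outside a
# node's limits, or one whose predicted consequence is doubtful, is referred
# to a higher level, with the human user as the top of the hierarchy.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Action:
    name: str
    severity: int                 # assumed proxy for the action's potential impact
    predicted_confidence: float   # node's confidence in its consequence prediction


class Node:
    def __init__(self, name: str, max_severity: int,
                 min_confidence: float, superior: Optional["Node"] = None):
        self.name = name
        self.max_severity = max_severity      # responsibility limit for this level
        self.min_confidence = min_confidence  # referral threshold on predictions
        self.superior = superior              # higher level (None = human user)

    def decide(self, action: Action) -> str:
        # Within limits and confident in the predicted consequence: act.
        if (action.severity <= self.max_severity
                and action.predicted_confidence >= self.min_confidence):
            return f"{self.name} executes {action.name}"
        # Otherwise refer upward; the top of the chain is the human user.
        if self.superior is not None:
            return self.superior.decide(action)
        return f"human user decides on {action.name}"


# A two-level hierarchy under a human user.
supervisor = Node("supervisor", max_severity=5, min_confidence=0.7)
asset = Node("asset", max_severity=2, min_confidence=0.9, superior=supervisor)

print(asset.decide(Action("reroute", severity=1, predicted_confidence=0.95)))
# -> asset executes reroute
print(asset.decide(Action("engage", severity=4, predicted_confidence=0.8)))
# -> supervisor executes engage
print(asset.decide(Action("abort", severity=9, predicted_confidence=0.99)))
# -> human user decides on abort
```

Note how introducing a non-deterministic (e.g. AI-generated) option at any node changes nothing about the enforcement: the limit check runs before execution, so the system behaviour stays within the bounds set for that level regardless of how the option was produced.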