ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. AI for Human Learning and Behavior Change

This article is part of the Research Topic: AI Behavioral Science: Understanding, Modeling, and Aligning AI Behaviors.

Modelling Societal Preferences for Automated Vehicle Behaviour with Ethical Goal Functions

Provisionally accepted
Chloe Gros1*, Leon Kester2, Marieke Martens2,3, Peter Werkhoven2,4
  • 1Universiteit Utrecht Faculteit Betawetenschappen, Utrecht, Netherlands
  • 2TNO, Utrecht, Netherlands
  • 3Technische Universiteit Eindhoven, Eindhoven, Netherlands
  • 4Universiteit Utrecht, Utrecht, Netherlands

The final, formatted version of the article will be published soon.

As automated vehicles (AVs) assume greater decision-making responsibilities, ensuring their alignment with societal values is critical. This study develops an Ethical Goal Function (EGF), a quantitative model encoding societal moral preferences for AV decision-making, within the framework of Augmented Utilitarianism (AU), which integrates consequentialist, deontological, and virtue-ethical principles while adapting to societal values. The EGF was constructed through discrete choice experiments (DCEs) with Dutch university students (N = 89), who evaluated scenarios involving six ethically relevant attributes: physical harm, psychological harm, moral responsibility, fair innings, legality, and environmental harm. These attributes were drawn from biomedical ethics and moral psychology and validated in prior studies as highly relevant to AV contexts. A multinomial logit model yielded attribute weights with an average predictive accuracy of 63.8% (SD = 3.3%) under 5-fold cross-validation. We propose embedding the EGF in a Socio-Technological Feedback (SOTEF) Loop, a stakeholder-informed process that continuously integrates societal input into AV design and enables dynamic refinement. While previous research has conceptually outlined ethical frameworks for AVs, this study provides the first empirical operationalization of such frameworks through EGFs and introduces the SOTEF Loop as a mechanism for continuous societal alignment. This dual contribution advances both the theory and the practical implementation of human-centered ethics in automated decision-making. Limitations include the cultural specificity of the Dutch sample and the reliance on textual presentation; future work should broaden the cultural scope and systematically compare presentation modes.
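To make the modelling idea concrete: in a multinomial logit model, each option's utility is a weighted sum of its attribute levels, and choice probabilities follow a softmax over the options' utilities. The sketch below illustrates this with the six attributes named in the abstract; the weights and scenario values are hypothetical placeholders for illustration only, not the estimates reported in the article.

```python
import math

# Hypothetical attribute weights for an EGF as a linear utility.
# These values are invented for illustration; the study estimates
# actual weights from discrete choice data.
WEIGHTS = {
    "physical_harm": -1.5,       # assumed: harms carry negative weight
    "psychological_harm": -0.8,
    "moral_responsibility": -0.6,
    "fair_innings": -0.4,
    "legality": 0.9,             # assumed: lawful behaviour scores positively
    "environmental_harm": -0.3,
}

def utility(option):
    """Linear EGF utility: weighted sum of the option's attribute levels."""
    return sum(WEIGHTS[a] * option.get(a, 0.0) for a in WEIGHTS)

def choice_probabilities(options):
    """Multinomial logit: softmax over the utilities of the choice set."""
    utils = [utility(o) for o in options]
    m = max(utils)
    exps = [math.exp(u - m) for u in utils]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

# Two stylised manoeuvres: swerving (more physical harm, lawful)
# versus braking (less physical harm, also lawful).
swerve = {"physical_harm": 1.0, "legality": 1.0}
brake = {"physical_harm": 0.2, "legality": 1.0}
probs = choice_probabilities([swerve, brake])
```

Under these made-up weights, braking receives the higher choice probability because it incurs less physical harm at equal legality; the empirical EGF would replace the placeholder weights with the estimates obtained from the DCE data.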

Keywords: automated vehicles, ethical decision-making, Ethical Goal Functions, discrete choice modelling, human-centered AI, AV Ethics

Received: 30 Jul 2025; Accepted: 21 Nov 2025.

Copyright: © 2025 Gros, Kester, Martens and Werkhoven. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Chloe Gros, c.n.gros@uu.nl

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.