Abstract
This paper examines people’s lay beliefs concerning the mind of an artificial intelligence (AI) as a decision-making agent and how these beliefs shape an individual’s own decision-making style in response. People perceive AI as more rational and reason-driven than humans, whom they view as emotionally driven. Two studies confirm these beliefs, with participants consistently judging AI as reason-based and humans as emotion-driven in decision-making. In a subsequent study, participants engaged in an economic ultimatum game. When participants thought they were interacting with an AI (vs. a human) counterpart, they adopted a more economically rational decision-making style, moving closer to the game-theoretic optimum. This shift in decision-making style was mediated by participants’ belief in the rational nature of AI. The findings suggest that perceptions of AI’s decision-making tendencies can influence the cognitive strategies adopted in response, with potential implications for human-AI interactions.
1 Introduction
Artificial intelligence (AI) systems—computational systems that adapt to information from the environment to perform actions (Russell and Norvig, 2020)—have become ubiquitous in business, government, and society. Despite the many technical forms (such as large language models, learning algorithms, GPTs, agentic bots, and chatbots) that laypeople may broadly categorize as “AI” and the various ways they may interact with or encounter such entities, laypeople’s general understanding and appraisal of AI as a decision-maker need to be investigated now more than ever—particularly given the increasing adoption of AI.
There is a push for greater inclusion of AI decision-making in several areas relevant to the lay public. Scaling its implementation is a pressing priority for most business organizations (Deloitte AI Institute, 2024), and among governments that have adopted AI systems, 60% intend for such systems to impact real-time decision-making (Deloitte AI Institute, 2021). A majority of Europeans surveyed—and 60% of those aged 25–34—indicate support for replacing national parliamentarians with algorithmic decision-makers (Jonsson and de Luca Tena, 2021). Reflecting this growing interest in AI uptake across public and private sectors, research has shifted from primarily examining whether and when people adopt or avoid recommendations by AI agents (e.g., Bigman and Gray, 2018; Castelo et al., 2019; Dietvorst and Bharti, 2020; Leung et al., 2018; Longoni et al., 2019) toward how people perceive AI recommendations (e.g., Yalcin et al., 2022; Yu et al., 2024) and how human and AI agents make decisions or work collaboratively (e.g., Zhang et al., 2023; Hitsuwari et al., 2023).
Such increasing application of AI systems in consequential decision-making tasks appears to rest on the premise that they are inherently objective and rational (Ramezani et al., 2023; Ryberg, 2023, 2024). Depictions of AI entities in fiction and pop culture may have contributed to forming such associations, where robots and intelligent computer systems are portrayed as either devoid of human emotion or struggling to understand newly acquired emotions (Hermann, 2021; Prajapat and Singh, 2024). Indeed, an assumption of rationality is incorporated into the fundamental definition of intelligent computational systems, which states that an AI agent will select the action expected to maximize its performance measure, given specific information from the environment (Russell and Norvig, 2020). Such sustained depictions and associations built over a long period may be pervasive enough to have given rise to a lay belief or an implicit theory (Dweck, 1996) concerning the mind of an AI, particularly its decision-making aspects compared to those of a human. Notably, this definition of AI agents aligns with that of the rational human actor Homo Economicus, who, given perfect information and acknowledging constraints, is expected to act in a manner that maximizes his or her metric of focus, i.e., utility (Levitt and List, 2008; Rodriguez-Sickert, 2009; Simon, 1955). In the present paper, we first investigate whether a lay belief exists that an AI, compared to a human, is perceived as a rational decision-maker.
Importantly, our research finds that when interacting with AI agents, persistent lay beliefs about AI may cause individuals to adopt a more rational decision-making style, akin to that of the rational economic actor, which relies on reason rather than feelings. A recent body of literature has attempted to understand how beliefs about AI influence human acceptance of computational systems. For example, strong beliefs about the comparatively superior intelligence of computer systems lead individuals to more readily adopt advice from AI agents (von Walter et al., 2021); skepticism about a computer system’s ability to learn leads to avoidance of advice from AI agents (Reich et al., 2022); and a belief that computer systems lack autonomy leads to reduced appreciation of work performed by a robot or AI agent (Huang and Chen, 2019). Considering the increasing interaction between humans and AI systems and their use in consequential decision-making contexts, the present paper explores whether an often-speculated, but not yet empirically demonstrated, belief about AI as a rational decision-maker might be associated with differences in the decision-making styles individuals adopt in response (e.g., becoming more reliant on reason or more on feelings; Hong and Chang, 2015). Using an incentive-compatible economic game paradigm, we observe that individuals demonstrate decision-making that can be interpreted as reflective of adopting a more economically rational and reason-driven style when interacting with an AI agent compared to another human, driven by their belief that AI itself is a reason-driven decision-maker. This finding may have implications for the expansion of AI-based decision aids and advisors in a wide variety of business, legal, and policymaking contexts.
2 Background and conceptual development
People construct lay beliefs about everyday objects and entities. These are implicit or naïve theories formed to establish causality and make sense of the world (Dweck, 1996). Lay beliefs can arise from direct learning experiences or through secondary learning via observations, cultural exchanges of knowledge, or media, and may significantly affect one’s perceptions and decision-making (Plaks et al., 2009). Lay beliefs can pertain to a wide variety of topics and can range in specificity of focus, such as beliefs about oneself and one’s abilities (Dweck, 1996), the world and its consequences for people (Furnham, 2003), or specific outcomes of specific actions (McFerran and Mukhopadhyay, 2013); thus, they can affect judgments regarding the belief’s foci.
Notably, past depictions of AI systems in media have portrayed AI as lacking emotions but highly rational and capable of complex logic. Moreover, AI systems are widely used to aid in computations based on formal logic. Hence, people may have formed lay beliefs about AI, particularly regarding its ability to engage in rational thinking and its lack of understanding of emotions, which influence perceptions of AI systems as decision-making agents. While a large body of research has focused on people’s adoption (Logg et al., 2019) or rejection (Dietvorst et al., 2015) of AI, much of this work has assessed AI as a good or service evaluated by consumers. Research that directly aims to understand people’s lay beliefs about AI as a decision-making agent is limited. For example, von Walter et al. (2021) demonstrate that people possess a lay belief that AI is more intelligent than humans. Note that intelligence and rational thought are considered conceptually and empirically separable (Stanovich et al., 2012); interestingly, each is said to enhance the propensity of the other to manifest (Lopes and Oden, 1991; Stanovich et al., 2011). Related work also finds that people believe algorithms are incapable of learning from mistakes (Reich et al., 2022) and that robots (an AI-adjacent entity) lack autonomy, and that such robots are less appreciated even when they engage in laudable work like disaster rescue (Huang and Chen, 2019). It is important to note that components of these beliefs, such as intelligence, learning, and autonomy, are traits typically ascribed to human individuals, particularly concerning their cognitively oriented or reason-driven decision-making.
In comparison, emotions and the ability to comprehend them are important factors in human decision-making (Pham, 1998). However, these attributes are not commonly ascribed to AI. Past research notes that people believe AI is incapable of experiencing emotions or performing tasks reliant on emotions (Castelo et al., 2019). People have a biased expectation for AI to succeed in objective (vs. subjective) tasks (Castelo et al., 2019) and are skeptical of AI recommendations in hedonic (vs. utilitarian) domains (Longoni and Cian, 2020), which are characterized by emotional and sensory aspects (Khan et al., 2005). These observations related to decision-making converge with growing research on how people understand the “mind” of the machine (Clegg et al., 2023) to make sense of the black box they perceive as driving AI decision-making. Therefore, it is important to understand if people have lay beliefs about the decision-making style of AI in general.
Literature on decision-making considers individuals to have two distinct orientations or styles of decision-making. In cognitive and social psychology, numerous dual-process models (e.g., System 1 vs. System 2) have been proposed (Smith and DeCoster, 2000; Evans, 2008), including the heuristic-systematic processing model (Chaiken, 1980; Chen and Chaiken, 1999), the central-peripheral processing model (Petty and Cacioppo, 1986), holistic-analytic processing (Nisbett et al., 2001), and the feeling-reason based decision-making model (Hong and Chang, 2015). Although differing in emphasis, these theories generally posit two qualitatively distinct modes of processing—often characterized as relatively “fast, automatic, and unconscious” versus “slow, deliberative, and conscious” (Evans, 2008). Such distinctions have been shown to predict a wide range of outcomes, including persuasion (e.g., Petty and Cacioppo, 1986), person perception (e.g., Neuberg and Fiske, 1987), and problem solving and reasoning (e.g., Sloman, 1996; Logan, 1988).
In this work, we focus on the feeling-reason based decision-making model (Hong and Chang, 2015) as it most closely reflects the varying responses of individuals to agents that may exhibit rational or intuitive reasoning styles. Prior research suggests, for example, that people tend to view their own decisions as more rational than those of others (VanBergen et al., 2022), while perceiving AI agents as having fewer emotional attributes relative to humans (Gray and Wegner, 2012). Importantly, evidence indicates that people recognize the distinction between feeling-based and reason-based decision processes and form beliefs about the extent to which others predominantly rely on one style over the other (Hong and Chang, 2015). For instance, individuals tend to believe that decisions made by younger (vs. older) individuals are more feeling-driven than reason-driven (Vijayakumar et al., 2023).
Given appraisals of AI as an entity with traits similar to those of humans and the noted propensity to understand and speculate about this “mind” of the machine, it is possible that people may hold a lay belief about the decision-making style of AI. We propose that people may have formed a lay belief that AI is relatively more reliant on reason (vs. feelings) than humans in decision-making. Formally,
H1: Individuals will differ in their lay beliefs about the decision-making style of an AI (vs. human) agent, such that an AI is perceived as more reason (vs. feeling) driven than a human agent.
As an extension of H1, when people cooperate or compete with an AI (vs. human) entity, they may adopt a comparatively more economically rational (vs. affective) decision-making style. In other words, people could adjust their decision-making style to align with their assumptions about the other agent. Formally:
H2: When competing against an AI (vs. human) agent, individuals may demonstrate behaviors reflecting a comparatively more reason (vs. feeling) driven decision-making style. This effect will be driven by differing lay beliefs about the decision-making style of the AI (vs. human) agent.
3 Materials and methods
We report three pre-registered studies and their results. The first two studies, 1A and 1B, provide support for H1. These studies employ different scales and measures, offering convergent support. Study 2 explores the downstream consequences of this lay belief on an individual’s own decision-making style when faced with an AI or another human. Using an established economic ultimatum game paradigm, Study 2 demonstrates that individuals make decisions that may reflect a greater reliance on reason in response to an AI (compared to a human) decision-maker, offering support for H2.
Sample sizes for each study were determined before data collection based on power analyses, using G*Power (Faul et al., 2007), assuming α = 0.05 and power = 0.80. Results from power analyses based on findings from past research (Hong and Chang, 2015; Garvey et al., 2023) suggested total sample sizes of 75 for Study 1A, 100 for Study 1B, and 186 for Study 2. Regardless, as the experiments were conducted online, we aimed for a minimum of 50 per cell in Studies 1A and 1B, in line with suggested best practices for online studies (Simonsohn et al., 2014), and 300 per cell in Study 2, as it measured a behavioral decision that included consequential monetary incentives—thus balancing best practices with economic considerations for the researchers when conducting such a study. Sensitivity analyses for the final sample sizes collected are also presented in detail for each study, performed using the G*Power tool, in the sections where the study procedures are discussed.
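As an illustration, the sketch below shows how comparable a-priori sample-size estimates could be obtained in Python with statsmodels rather than G*Power; the effect sizes entered are illustrative placeholders, not the exact values derived from the cited past research.

```python
# Illustrative re-creation of a-priori power analyses (not the authors' G*Power runs).
# Effect sizes below are assumed placeholders for demonstration purposes only.
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

alpha, power = 0.05, 0.80

# Two-cell between-participants comparison (cf. Studies 1A/1B), assuming d = 0.65
n_per_cell = TTestIndPower().solve_power(effect_size=0.65, alpha=alpha, power=power)
print(f"Independent-samples t-test: ~{n_per_cell:.0f} participants per cell")

# Two-group one-way ANOVA (cf. Study 2), assuming Cohen's f = 0.21
n_total = FTestAnovaPower().solve_power(effect_size=0.21, alpha=alpha,
                                        power=power, k_groups=2)
print(f"Two-group ANOVA: ~{n_total:.0f} participants in total")
```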
All sample sizes, experimental procedures (including measures and manipulations), and data exclusion criteria were pre-registered before data collection for each of the three studies and are reported. Links to pre-registrations have been provided in the description of each study, and any deviations are also reported in the interest of full transparency. Informed consent was obtained from participants, and all procedures, actions, and analyses were performed in compliance with relevant laws and institutional norms, as per the IRB approval (Date: 09 January 2024, reference number: HS-LR-DC-232-190-DeFranza) under which the research was conducted. Relevant files (data, syntax, outputs, and preregistration PDFs) can be accessed on the OSF repository using this link: https://osf.io/x8q4b/?view_only=86ce86afe2cd42dba39638183698501d.
3.1 Studies 1A and 1B
3.1.1 Study 1A: method and procedure
This study1 examines whether individuals endorse distinct lay beliefs about the decision-making styles of AI and human decision-makers, using a single-factor 2 (decision-maker: AI vs. human) between-participants design. Participants (N = 100, 47% described themselves as female, 53% as male, and 0% as other; Mage = 35.10) recruited from Prolific completed the study for monetary compensation. Sensitivity analysis revealed that this sample size provided 80% power to detect an effect size of f = 0.283 or greater for comparing means between two groups using an ANOVA and an effect size of d = 0.356 or greater for one-sample t-tests (for two sub-groups of sizes n = 50), with a 5% false-positive rate.
Participants were randomly assigned to one of the two conditions and asked to indicate their lay belief about the decision-making style of a decision-maker on a nine-point bipolar scale (1 = “feeling,” 9 = “reason”): “I think that a typical person (AI) will make decisions largely based on ____”, which was modified and adapted from previous research (Hong and Chang, 2015). Finally, participants indicated their age and gender. Note that the word “typical” was used in the construction of the measure items for two strategic reasons: to gauge participants’ beliefs about the two category exemplars “AI” or “Humans” based on their own organic understanding, and to avoid interference from prior experience or knowledge they may have regarding specific sub-categories of the exemplars, which may bias their responses (such as emotive AI or anthropomorphized AI, which may bias beliefs toward feeling, or a nuclear physicist vs. a caregiver, which may bias beliefs toward reason vs. feeling, respectively).
3.1.2 Study 1A: results
Results from one-sample t-tests showed that participants tended to consider an AI decision-maker as relatively reliant on reason, with ratings significantly above the scale midpoint [i.e., 5; M = 8.35, SD = 0.86; t (48) = 27.397, p < 0.001, Cohen’s d = 0.855]. In contrast, participants tended to view a human decision-maker as relatively reliant on feeling, with ratings significantly below the scale midpoint [M = 4.33, SD = 1.48; t (50) = −3.22, p = 0.001, Cohen’s d = 1.479]. The difference in lay beliefs about the two decision-makers was significant [F (1, 98) = 273.14, p < 0.001, η²p = 0.73].
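For illustration only (the analyses above were conducted in standard statistical software), the following sketch shows the form of these one-sample and between-condition tests in Python, using simulated ratings whose properties loosely resemble the reported means and standard deviations.

```python
# Illustrative sketch of the Study 1A tests on simulated (not actual) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ai_ratings = rng.normal(8.3, 0.9, 49)     # hypothetical AI-condition ratings (1-9 scale)
human_ratings = rng.normal(4.3, 1.5, 51)  # hypothetical human-condition ratings

midpoint = 5  # scale midpoint used as the test value
print(stats.ttest_1samp(ai_ratings, popmean=midpoint))     # AI: reliance on reason
print(stats.ttest_1samp(human_ratings, popmean=midpoint))  # human: reliance on feeling
print(stats.f_oneway(ai_ratings, human_ratings))           # between-condition ANOVA
```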
This preliminary evidence suggests people hold a lay belief that AI decision-makers rely more on reason compared to human decision-makers. While this result supports H1, it is limited in that the bipolar semantic-differential scale may have forced participants to treat reason and feeling as mutually exclusive, potentially introducing a demand effect. Moreover, although people do rely differentially on feelings or reason, they may also engage both simultaneously. The next study addresses these concerns directly.
3.1.3 Study 1B: method and procedure
One may argue that the results observed in Study 1A could be a function of the measure used. In other words, the structure of the scale makes it difficult for participants to express the belief that human (AI) decision-makers use both reason and feeling simultaneously. The present study addresses this by utilizing a measure that accounts for the coexisting but differential reliance on reason and feeling, a method adapted from past literature (Hong and Chang, 2015).
Participants (N = 101, 47.5% described themselves as female, 50.5% as male, and 2% as other—including nonbinary; Mage = 36.31) recruited from Prolific completed the survey for monetary compensation. Sensitivity analysis revealed that this sample size provided 80% power to detect an effect size of dz = 0.249 or greater for comparing means between two groups using a paired t-test, and an effect size of d = 0.249 or greater for one-sample t-tests, with a 5% false-positive rate.
This pre-registered study2 employed a 2 (decision-maker: AI vs. human) × 2 (decision-making style: reliance on reason vs. feeling) within-participants design. All participants were asked to indicate their opinions on the decision-making styles of a typical person and an AI, in randomized order, using a nine-point scale (1 = “strongly disagree,” 9 = “strongly agree”), adapted from previous research (Hong and Chang, 2015; Vijayakumar et al., 2023): (1) “A typical [AI/person] makes decisions largely based on the functionality of the options and rational arguments” and (2) “A typical [AI/person] makes decisions largely based on feelings regarding the options and emotional arguments”. We then calculated an index of decision-making style for each decision-maker by subtracting the first measure item from the second (i.e., the reason measure from the feeling item), such that higher values of the index reflect greater reliance on feeling compared to reason in decision-making3 (Hong and Chang, 2015; Vijayakumar et al., 2023). Finally, participant age and gender details were collected.
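A minimal sketch of how this index could be computed is shown below, assuming hypothetical column names and illustrative ratings; higher index values indicate greater reliance on feeling relative to reason, as described above.

```python
# Illustrative computation of the decision-making-style index (feeling minus reason).
# Column names and ratings are hypothetical placeholders, not study data.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "ai_reason":     [9, 8, 9, 7, 8],
    "ai_feeling":    [2, 1, 3, 2, 1],
    "human_reason":  [5, 4, 6, 5, 4],
    "human_feeling": [7, 6, 8, 6, 7],
})

df["ai_index"] = df["ai_feeling"] - df["ai_reason"]           # negative -> reason-reliant
df["human_index"] = df["human_feeling"] - df["human_reason"]  # positive -> feeling-reliant

print(stats.ttest_1samp(df["ai_index"], popmean=0))        # AI index vs. zero
print(stats.ttest_1samp(df["human_index"], popmean=0))     # human index vs. zero
print(stats.ttest_rel(df["ai_index"], df["human_index"]))  # paired comparison
```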
3.1.4 Study 1B: results
Results from one-sample t-tests showed that participants tended to consider an AI decision-maker as more reliant on reason than feeling, as reflected in an index mean significantly below zero [M = −5.78, SD = 2.16; t (100) = −26.884, p < 0.001, Cohen’s d = 2.161]. On the other hand, participants tended to view a human decision-maker as more reliant on feeling than reason, with the index mean significantly above zero [M = 1.68, SD = 2.42; t (100) = 6.989, p < 0.001, Cohen’s d = 2.42]. This difference in lay beliefs about the two decision-makers was significant [paired sample t-test: t (100) = 22.427, p < 0.001; Cohen’s d = 3.345].
Taken together, studies 1A and 1B support the idea that people may hold a lay belief about the decision-making style of an AI (vs. human), particularly viewing it as more reliant on reason rather than feeling. In the next study, we further explore how this lay belief can influence subsequent decision-making. We suggest that people may rely comparatively more on reason in response to a decision made by an AI.
3.2 Study 2
Given evidence that a lay belief about the nature of AI decision-making may exist, it is worthwhile to explore whether this belief impacts individuals’ attitudes and decisions when interacting with AI. To do so, we utilized an incentive-compatible single-shot economic ultimatum game paradigm. The ultimatum game is a useful context for evaluating decisions that can be interpreted as reflecting rationality in decision-making in the face of unexpected allocations and payouts (Harsanyi, 1961; Thaler, 1988; Morewedge, 2009). One important feature of the single-shot game is that it offers a closed-form Nash equilibrium: a game-theoretic prediction of play from which no player can profitably deviate unilaterally (Güth et al., 1982; Suleiman, 1996; Nowak, 2000; Garvey et al., 2023). This feature allows researchers to compare game results to a theoretically defined baseline of economic rationality. Additionally, past research has demonstrated that the paradigm is particularly effective in providing a controllable environment for decision-making between human players and their counterparts, both human and non-human (Sanfey et al., 2003; Sanfey, 2007; Garvey et al., 2023). By using the single-shot game, we also deviate from recent research in Human-Computer Interaction that has utilized the economic ultimatum game paradigm in multi-round scenarios to investigate how people may train an AI toward socially (though not necessarily economically) normative behavior (such as Treiman et al., 2025). Arguably, a single-shot game (compared to a multi-round game) does not offer a chance to train the opponent (either AI or human); instead, it serves as an assessment of an individual’s decision based on their beliefs about the decision-making of the opponent and their own reaction to the opponent’s offer. Hence, using a single-shot paradigm, we aimed to avoid potential confounds arising from a desire to train or otherwise influence the opponent’s behavior over time.
It is important to note that there is a well-documented tendency for human players to violate the expectations of economic rationality enforced by the game (Oosterbeek et al., 2004; Larney et al., 2019). Instead of pursuing the utility-maximizing strategy of accepting a given offer, which reflects behavior driven by economic rationality, human participants have demonstrated an expectation of equity, or social normativity, rejecting offers that do not approach a 50–50 split (Suleiman, 1996; Nowak, 2000; Larney et al., 2019)—a behavior that can be interpreted as being driven more by affective and emotional factors. Thus, the belief in AI acting as a rational decision-maker might influence human players’ acceptance of non-equitable but theoretically optimal proposals, which can result in outcomes reflective of economically rational behavior—a consequence of reason-driven decision-making, as opposed to feelings. In other words, players may be encouraged to assess the specific constraints of the game in relation to their expected return and focus less on relevant social norms or competitive context (Rodriguez-Sickert, 2009; Simon, 1955), leading to more economically rational (utility-maximizing) outcomes (Gigerenzer, 2018; Levitt and List, 2008). Taken together, these features make the game a suitable paradigm for testing H2.
3.2.1 Study 2: method and procedure
Participants (N = 630, 49.2% described themselves as female, 49% as male, and 1.7% as other—including non-binary and transgender; Mage = 42.03) recruited from Prolific completed the survey for monetary compensation. Sensitivity analysis revealed that this sample size provided 80% power to detect an effect size of f = 0.112 or greater for comparing means between two groups using an ANOVA, a critical χ² = 3.841 or greater for chi-square analysis with df = 1, and an effect size of d = 0.140 or greater for one-sample t-tests (for two sub-groups of sizes n = 315), with a 5% false-positive rate. This pre-registered4 study used a single-factor 2 (proposer: AI vs. human) between-participants design.
All participants were informed that the study would consist of a single round in which they would be matched randomly with another player. One would be designated the proposer, who would begin the game by dividing $1; the other would be designated the responder, asked to accept or reject the proposed division. A realizable economic incentive was also introduced: the outcome of their decision would be reflected in the final compensation as a bonus.
Depending on their randomly assigned condition, participants were told the other player was either another human participant or a recently developed AI. Unbeknownst to them, all participants were assigned the role of responder and waited for the proposer to begin the game by suggesting a division of the $1.00 bonus. We deemed the $1 amount sufficient as a nominal bonus of interest, based on past research indicating that $1 is impactful for eliciting consequential behavioral responses from experimental participants recruited from online participant pools, where payment for participation is typically a fraction of $1 for short-duration studies like our current experiment (Wang and Murnighan, 2017). Upon receiving the proposal, participants had a decision to make: they could choose to either accept or reject the proposer’s offer. If they accepted the proposed division, each player would earn their share of the divided bonus. On the other hand, if they rejected, both would earn $0. After being informed of this rule, all participants were shown the offer from the proposer (Figure 1). The ratio of $0.10 to $0.90 was specifically adopted, as a 1:9 ratio is considered an “unfair” split in operationalizations of economic ultimatum games in past literature (Sanfey et al., 2003).
Figure 1
Note that despite the uneven split favoring the proposer, accepting the offer is the best economic outcome for the responder: accepting earns $0.10, whereas rejecting earns nothing. Therefore, the choice to accept can be interpreted as a decision based primarily on reason. Hence, participants’ decision to accept or reject the given offer was our main dependent variable (1 = accept, 0 = reject), serving as a proxy for the adoption of a more reasoned (particularly economically rational) rather than feeling-driven decision-making style. Once again, note that relative to the compensation typically offered for short-duration studies on contemporary online participant pool platforms such as Prolific, a $0.10 bonus is sizable, making it a consequential amount for participants to consider.
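The payoff logic behind this interpretation can be made explicit in a short sketch (illustrative only), showing why accepting the $0.10 offer is the responder’s payoff-maximizing response in the single-shot game.

```python
# Illustrative payoff structure of the single-shot ultimatum game used in Study 2.
def responder_payoff(offer: float, accept: bool) -> float:
    """The responder earns the offered share if accepting, otherwise $0."""
    return offer if accept else 0.0

stake, offer = 1.00, 0.10  # the 1:9 split shown to participants
for accept in (True, False):
    proposer = (stake - offer) if accept else 0.0
    print(f"accept={accept}: responder ${responder_payoff(offer, accept):.2f}, "
          f"proposer ${proposer:.2f}")
# Because $0.10 > $0.00, accepting the unequal offer maximizes the responder's payoff.
```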
All participants were subsequently presented with two questions to assess their beliefs about the decision-making style of the proposer (Hong and Chang, 2015; Vijayakumar et al., 2023) using a nine-point scale (1 = “not at all”, 9 = “to a large extent”): (1) “When considering whether to accept or reject the offer by the other player, to what extent did you think the other player made decisions largely based on functionality of the options and rational arguments?” and (2) “When considering whether to accept or reject the offer by the other player, to what extent did you think the other player made decisions largely based on feeling regarding the options and emotional arguments?” An index of decision-making style was computed as in Study 1B, such that higher values reflect a greater reliance on feeling as compared to reason in decision-making. Participants’ age and gender details were collected at the end.
3.2.2 Study 2: results
Responses from four participants who indicated having language-related or technical difficulties were removed (as per pre-registered criteria), leaving responses from 626 participants (48.9% described themselves as female, 49.4% as male, and 1.8% as other—including nonbinary and transgender; Mage = 42.03) for analysis.
Choice: first, the choice to accept the offer (1 = accept, 0 = reject) was regressed on proposer type (human, AI) using a logistic regression, which revealed a significant effect [b = −0.480, SE = 0.163, χ²(1) = 8.78, p = 0.003], indicating that participants were more likely to accept the offer when it was made by an AI (49.2%, or 153/311 responses) than by a human (37.5%, or 118/315 responses) proposer (N = 626 responses in total).
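A minimal sketch of this analysis is shown below, using simulated data with acceptance rates matching those reported above and hypothetical variable names; the original analysis was run in standard statistical software.

```python
# Illustrative logistic regression of acceptance on proposer type (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "proposer": ["AI"] * 311 + ["human"] * 315,    # condition labels
    "accept": np.r_[rng.binomial(1, 0.492, 311),   # hypothetical acceptance draws
                    rng.binomial(1, 0.375, 315)],  # matching the reported rates
})

model = smf.logit("accept ~ C(proposer)", data=df).fit()  # "AI" is the reference level
print(model.summary())
```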
Lay belief: results from one-sample t-tests showed that participants tended to view an AI decision-maker as significantly more reliant on reason than feeling, as reflected by the index mean being significantly below zero [M = −1.27, SD = 4.52; t (310) = −4.967, p < 0.001, Cohen’s d = 4.521]. Participants viewed a human decision-maker as relatively more reliant on feeling than reason, with the index mean significantly above zero [M = 1.48, SD = 4.19; t (314) = 6.286, p < 0.001, Cohen’s d = 4.186]. The decision-making style index was significantly different for the two proposers [F (1, 624) = 62.647, p < 0.001, η²p = 0.091].
Mediation analysis: we employed the PROCESS macro (Model 4) in SPSS (Hayes, 2013) to test a mediation model of the form X → Z → Y. We note that other mediation models could be specified and might be equally viable or even preferable; we used Model 4 for simplicity and to align with past research that has adopted this model when assessing the effect of a lay belief about the decision-making style of another entity (Vijayakumar et al., 2023). In this model, the experimental condition to which the participant was randomly assigned (proposer: AI, coded as “+1,” vs. human, coded as “−1”) was entered as the independent variable (X), serving as the starting point of the experiment. The participant’s decision to accept the offer (coded as: accept = “1,” reject = “0”) was entered as the outcome variable (Y), consistent with the typical stimulus, application of belief, consequent judgment or decision framework for how beliefs and attitudes can affect subsequent decisions (i.e., the Theory of Reasoned Action; Bagozzi, 1986).
The participant’s beliefs about the proposer’s decision-making style, assessed via the calculated comparative index, were included as the mediating variable (Z in the model), as existing beliefs can influence the final decision or judgment and may drive the decision to accept or reject. Theoretically, the decision to accept or reject the proposed offer can reflect the strategy adopted in the ultimatum game (Sanfey et al., 2003; Sanfey, 2007; Garvey et al., 2023), which, in turn, can be an outcome of beliefs about the decision-making style of the proposer at the time of making the proposal. Hence, reversing the two variables specified as Z and Y in the current model may not be reasonable. However, we acknowledge that this is a shortcoming of the mediation model analyses as discussed by Fiedler et al. (2018), and this limitation is discussed further in the later part of this manuscript.
Results of the mediation analysis: conditional on the assumed model X → Z → Y specified in the previous section, our statistical test shows that Z (participants’ beliefs about the proposer’s decision-making style) can account for a significant portion of the variance. The analysis, conducted using a bootstrapping method [PROCESS Model 4, 95% Confidence Interval (CI), 5,000 resamples; Hayes, 2013] with the proposer as X, the decision-making style index as Z, and the decision to accept the offer as Y, showed significant paths from the proposer to the perceived decision-making style of the proposer and from the perceived decision-making style to the participant’s decision to accept the offer, with a significant indirect effect accounting for the relationship between X and Y (B = 0.1665, SE = 0.0354, 95% CI: [0.1057, 0.2433]), as illustrated in Figure 2.
Figure 2
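As an illustration of the logic of such a bootstrap test (this is not the PROCESS macro itself), the sketch below estimates an indirect effect X → Z → Y on simulated data with hypothetical coefficients; the a-path is estimated with OLS and the b-path with a logistic model to mirror the binary outcome.

```python
# Illustrative percentile-bootstrap test of an indirect effect (X -> Z -> Y),
# analogous in spirit to PROCESS Model 4; data and coefficients are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 626
x = np.where(rng.random(n) < 0.5, 1.0, -1.0)  # proposer: AI = +1, human = -1
z = -1.4 * x + rng.normal(0, 4, n)            # perceived decision-making-style index
p = 1 / (1 + np.exp(-(0.2 - 0.15 * z)))       # lower (reason-like) index -> more acceptance
y = rng.binomial(1, p)                        # 1 = accept, 0 = reject

def indirect_effect(idx):
    """Estimate a*b on the rows in idx: a from OLS (X -> Z), b from logit (Z -> Y | X)."""
    a = sm.OLS(z[idx], sm.add_constant(x[idx])).fit().params[1]
    exog = sm.add_constant(np.column_stack([x[idx], z[idx]]))
    b = sm.Logit(y[idx], exog).fit(disp=0).params[2]
    return a * b

point = indirect_effect(np.arange(n))
boot = [indirect_effect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```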
These results are consistent with the proposed mediation model involving the nature of the proposer (human or AI), perceptions concerning the proposer’s decision-making style, and the decision to accept (vs. reject) the proposer’s offer. Participants showed greater acceptance of the offer in the ultimatum game when the proposer was an AI, a behavior consistent with approaching the theoretical optimal outcome (Güth et al., 1982; Suleiman, 1996), and thus interpretable as economically rational decision-making. When the proposer was presented as another human, participants more often forewent their possible earnings, a pattern that replicates findings from prior research (Larney et al., 2019).
4 Discussion
The current research proposes that people may have differing lay beliefs about the decision-making of AI and human decision-makers. People believe AI is more reason-driven, which contrasts with their beliefs about humans. Furthermore, due to this belief, when people compete against an AI, they may adopt a comparatively more reason-driven decision-making style. Two studies validate the lay belief about the decision-making style of AI agents. A third study shows that the lay belief can influence an individual’s decision-making style when interacting with an AI versus a human agent in a single-shot ultimatum game with real monetary stakes.
4.1 Limitations and future directions
There are some limitations to the current research that must be acknowledged.
First, while the current research provides evidence that supports our argument that people may adopt a more reason-driven (vs. feeling-driven) decision-making style when they believe they are interacting with an AI (vs. a human; Study 2)—which is itself believed to be more reason-driven (vs. feeling-driven) in its decision-making—it does not clarify whether the observed phenomenon is specific to AI or whether it reflects a contextualized application of social matching. Participants may have demonstrated a general tendency to take a more thoughtful or calculative approach, expecting the counterpart to be more rational. Future research may clarify this by contrasting individuals’ behavior in a similar one-shot ultimatum game, comparing an AI counterpart to another human participant who is specifically described with labels such as “highly intelligent” or “devoid of emotional ability.” Doing so will further elucidate whether the behavior observed in Study 2, which we interpret as economically rational (i.e., accepting the socially unfair but utility-maximizing offer in the one-shot economic ultimatum game), is due to a general tendency to take a calculated approach to accommodate a counterpart who is expected to be more rational, or whether it stems from a yet unexplored characteristic that is solely specific to the AI entity, regardless of its perceived decision-making style. For instance, alternative explanations, such as perceptions of reduced moral agency in AI or weakened expectations for AI to engage in norm-following behavior (assuming it is considered normative to make fair offers), could also explain the observed phenomenon. Although the mediation model in Study 2 supports the idea that the phenomenon is driven by beliefs about the decision-making style (as more reason-driven or feeling-driven) of the proposer, it does not rule out these alternative explanations. Future research can focus on testing these pathways and investigating whether the observed phenomenon is an instance of a broader attribution and matching process based on lay beliefs, as we argue, or whether qualitatively distinct AI-specific phenomena also play a role.
Second, while we specifically used the phrase “typical AI/Human” in constructing our measure items to assess the focal lay belief, in order to avoid biased responses stemming from past experiences with specialized subcategories of the two focal entities, it remains a valid concern that people’s understanding of AI may not be as uniform as we have assumed. While our measure items capture general beliefs broadly across the categories of AI and humans, individuals may also hold nuanced views of specific AI systems (Ada Lovelace Institute and The Alan Turing Institute, 2023). Recent surveys suggest that a large majority of people (approximately 75%) have used at least one AI-driven service in the past (Balaji et al., 2024), which supports our assumption that participants could call to mind a categorical exemplar when responding to the measure items assessing their lay beliefs; nevertheless, the word “typical” may have been understood differently by different participants. We urge readers to consider the nuanced use of the phrase in relation to our focal research and procedures, and not to mistake it as an intention to tar a broad spectrum of AI with the same brush—particularly given the rate of technological improvements in the field.
Third, in our research, we have interpreted the decision to accept the “unfair” offer in the one-shot economic ultimatum game as one that is economically rational or reflective of reason-driven decision-making. We consider it so because such a decision reaches the game-theoretic optimum and produces a tangible benefit to the participant, regardless of whether this outcome is perceived to be a fair one. However, alternative interpretations are possible for the decision to reject the offer. While rejecting the offer is not the economically rational option, it could also stem from a desire for fairness enforcement or to ensure norm compliance by the counterpart, which are independent of (reason vs. feeling driven) decision-making propensity (Gigerenzer, 2018; Levitt and List, 2008; Simon, 1955). Many multi-shot ultimatum games have indeed observed behavior and motives reflective of the latter interpretation (such as Treiman et al., 2025, where participants attempted to train AI to behave as such over many rounds). Since such motives were not measured in the current research (Study 2), it is not possible to rule out these alternative interpretations of participants’ decisions to reject the offer and consider them solely as reflective of emotionally driven and non-reason-oriented decision-making. Future research can investigate the nuances of rejection in the one-shot paradigm more thoroughly.
It is also important to clarify that our results are specific to a bargaining context where rationality aligns with acceptance. Our finding that humans act more in accordance with the Nash equilibrium when facing an AI suggests that they view AI agents as instrumental, reason-based actors. In cooperative social dilemmas (e.g., a Prisoner’s Dilemma), however, this same mechanism might yield different results. If humans expect AI agents to act as rational maximizers, they may anticipate defection rather than cooperation, potentially leading to lower levels of trust compared to human-human interaction. Future research should investigate whether the lay belief we observed facilitates coordination in bargaining but hinders cooperation in social dilemmas.
Moreover, while observing the pattern of results in Study 2, it is notable that although acceptance rates increased in the AI condition, rejection of unfair offers remained frequent. One plausible explanation for this observation could be that rejection in the Ultimatum Game may be driven by two distinct forces: the desire to punish perceived hostile intentions (negative reciprocity) and the distaste for unequal outcomes (inequity aversion; Fehr et al., 2006; Falk et al., 2008). While the AI identity can likely mitigate the former, it does not eliminate the latter. Participants motivated by pure inequity aversion can reject low offers regardless of the proposer’s identity to avoid a disadvantageous split. Furthermore, individual differences in anthropomorphism (Nass et al., 1994) mean that some participants may still project social agency onto the AI, applying fairness norms rigidly despite the non-human nature of the counterpart. Future research might explore the unique moderation provided by these competing mechanisms.
Lastly, while the mediation model (Study 2) supports our conceptualization, there are some limitations arising from the study’s operationalization and interpretation. As the mediator and outcome are measured within the same experimental session, post-hoc rationalization may have played a role. One could argue that participants inferred beliefs about the proposer’s decision-making style based on their own choice to accept or reject the offer. Although we have provided theoretical justification for the mediation model, due to how the experiment was operationalized, such an argument cannot be completely ruled out. Future research can address this by replicating the experimental design as a two-step experiment instead of how it was operationalized in the current research. Participants’ lay belief can be measured at time T1, and the one-shot economic ultimatum game can be conducted at time T2 (possibly a few days later) to assess the final outcome. Such an operationalization will avoid the possibility of post-hoc rationalization.
To conclude, the limitations of the studies in this paper must also be considered when interpreting their observations. It is important to note that the current research offers only one possible explanation for the phenomenon observed in participants’ behavior toward the AI/human encountered. It does not rule out alternative interpretations related to the basis of decision-making itself or factors beyond the focal decision-making paradigm explored. For instance, behavior and decision-making are often studied under dual-process theory (refer to Kahneman, 2003), which contrasts a quick and intuitive decision-making style (often referred to as system 1 or type 1 decision-making) with a slow and deliberative decision-making style (system 2 or type 2 decision-making). While there could be parallels with the reason vs. feeling reliance in decision-making considered in our theorization, based on our understanding of the primacy of feelings in judgment (Pham et al., 2001) and the role of affect in decision-making when contrasted with deliberate decision-making involving reason (Pham, 1998; Pham, 2007), it still requires empirical scrutiny. Similarly, while the lay belief about reason vs. feeling-driven decision-making propensity was observed to influence the behavior of interest in our research (Study 2), we cannot rule out possible alternative explanations. For instance, we have not tested for, or ruled out, social factors such as social morality and sense of agency (Salatino et al., 2025), social expectations when perceiving AI as a social being similar to other humans (Zhou et al., 2024), demands of normative behavior and conformity (Mutzner et al., 2026), agreeableness, and expectations of reciprocity in behavior (Upadhyaya and Galizzi, 2023), or the potential to complement and match the behaviors of AI agents when working in human-AI teams (Flathmann et al., 2024), all of which have been shown to affect the decisions and behaviors of individuals when interacting with AI agents in a variety of contexts. While it would be possible to theoretically speculate that these social factors may equally lead to similar outcomes as the focal lay belief concerning decision-making probed in this research, future studies need to empirically test the similarity or dissimilarity of these various factors in affecting behavior, as observed in the current research. As AI technology becomes more ubiquitous and human–AI interactions become more frequent, it is imperative to understand how these differing beliefs or expectations concerning AI (compared to human counterparts) can influence relational dynamics between AI and humans in both one-on-one and group situations, as proposed by Reinecke et al. (2025) and Lange et al. (2025).
4.2 Contribution and implications
The current research enhances our understanding of how laypeople may appraise AI as a decision-making entity, extending the literature on perceptions of the machine’s mind (Clegg et al., 2023) underlying AI and its decision-making process. In addition to contributing to the growing literature on lay beliefs and their consequences (Dweck, 1996; McFerran and Mukhopadhyay, 2013), particularly concerning AI as an agentic entity (von Walter et al., 2021), the current research also reveals an important consequential outcome: the adoption of belief-informed decision-making styles. Notably, a more reason-reliant decision-making style (Hong and Chang, 2015), rather than a feelings-reliant one, was observed. As AI agents become more common, increased interactions with them may promote more economically rational decision-making through mere exposure. Firms adopting AI should consider that customers or suppliers who hold the lay beliefs highlighted in this research may use a more reason-oriented decision-making style when interacting with AI agents. For example, exposure to an AI counterpart may encourage human decision-makers with such beliefs to focus more on costs and benefits (rational aspects) in negotiations. However, in emotionally charged situations or situations requiring an understanding of emotional details, decision-makers might prefer to rely more on emotions, leading them to prefer a human agent over the firm’s AI agent—especially if they hold the lay beliefs about AI (vs. human) decision-making demonstrated in this research. In such cases, it may be beneficial for the firm to use an AI with advanced emotional capabilities that enable emotive conversations (Huang and Rust, 2024) and to communicate effectively about the AI’s emotional capacities.
As AI becomes more involved in policymaking, regulatory bodies are concerned about its impact (UNESCO, 2024). Our findings suggest that under certain conditions, people with a strong lay belief that AI is more reason (vs. feeling) driven in its decision-making may accept AI involvement in situations requiring careful reasoning but may resist its involvement when emotional considerations are required. Further empirical research is needed to better understand the contextual or situational factors that may interact with lay beliefs regarding AI decision-making and warrant its inclusion or exclusion in policymaking accordingly. Our findings may also help clarify task- or domain-specific AI aversion (Dietvorst et al., 2015) or appreciation (Logg et al., 2019), and future researchers may explore how lay beliefs about AI decision-making can influence its acceptance across various domains of cognitive and emotionally subjective focus in problem-solving.
The current research also extends ongoing work on laypeople’s understanding of AI, which has implications for how people evaluate AI recommendations. This is particularly timely because recent research (Tully et al., 2025) shows that AI literacy (i.e., objective knowledge about AI) and AI receptivity (i.e., a tendency toward AI usage) are negatively correlated. The authors suggest that individuals with lower objective knowledge about what AI is and how it operates may rely more on AI for decision-making, in part because they perceive AI to be magical and experience feelings of awe. In other words, the lay belief about AI’s rational decision-making style may not be grounded in objective understanding, and such knowledge gaps and mystified perceptions could increase acceptance of AI recommendations even when those recommendations are not necessarily trustworthy or unbiased. Indeed, AI systems are often questioned and criticized for reductionistic and biased decision-making processes (Newman et al., 2020), underscoring the need for greater public—and policymaker—education and information. In addition, emerging evidence suggests that AI use itself may systematically shape decision-making—for example, by increasing the likelihood of immoral behavior (Gill, 2020) and reducing prosocial orientation (Granulo et al., 2024). Accordingly, we argue that decision-makers should be aware of the potential existence of lay beliefs about AI’s decision-making style and consider how such beliefs may influence the acceptance of AI recommendations and subsequent human decisions.
Statements
Data availability statement
The datasets presented in this study are available in an online repository: Relevant files, including the data, syntax, outputs, and preregistration PDFs, can be accessed via the OSF repository at: https://osf.io/x8q4b/?view_only=86ce86afe2cd42dba39638183698501d.
Ethics statement
The studies involving humans were approved by the Human Research Ethics Committee - Humanities (HREC-HS), Office of Research Ethics, University College Dublin. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.
Author contributions
SV: Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. WYY: Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. DD: Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
1.^Pre-registered: https://aspredicted.org/VNX_G2P.
2.^https://aspredicted.org/2XP_CBJ
3.^This deviated from the preregistered calculation of the index, which was originally specified as the reason measure minus the feelings measure. In the present study, we reversed the subtraction to align more closely with Hong and Chang’s original approach. Note that this change only affects the sign of the resulting index and does not alter the results or the pattern of findings reported in any way.
References
1
Ada Lovelace Institute and The Alan Turing Institute (2023). How do people feel about AI? A nationally representative survey of public attitudes to artificial intelligence in Britain. Available online at: https://adalovelaceinstitute.org/report/public-attitudes-ai (Accessed March 10, 2026).
2
BagozziR. P. (1986). Attitude formation under the theory of reasoned action and a purposeful behaviour reformulation. Br. J. Soc. Psychol.25, 95–107. doi: 10.1111/j.2044-8309.1986.tb00708.x
3
BalajiN.BharadwajA.ApothekerJ.MooreM. (2024). Consumers know more about AI than business leaders think. Available online at: https://www.bcg.com/publications/2024/consumers-know-more-about-ai-than-businesses-think (Accessed March 10, 2026).
4
BigmanY. E.GrayK. (2018). People are averse to machines making moral decisions. Cognition181, 21–34. doi: 10.1016/j.cognition.2018.08.003,
5
CasteloN.BosM. W.LehmannD. R. (2019). Task-dependent algorithm aversion. J. Mark. Res.56, 809–825. doi: 10.1177/0022243719851788
6
ChaikenS. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J. Pers. Soc. Psychol.39, 752–766. doi: 10.1037/0022-3514.39.5.752
7
ChenS.ChaikenS. (1999). “The heuristic-systematic model in its broader context,” in Dual-Process Theories in Social Psychology, eds. ChaikenS.TropeY. (New York, NY: Guilford), 73–96.
8
CleggM.HofstetterR.de Emanuel BellisSchmittB. H. (2023). Unveiling the mind of the machine. J. Consum. Res.51, 342–361. doi: 10.1093/jcr/ucad075
9
Deloitte AI Institute (2021). The government and public services AI dossier Deloitte. Available online at: https://www2.deloitte.com/us/en/pages/consulting/articles/ai-dossier-government-public-services.html
10
Deloitte AI Institute (2024). State of generative AI in the Enterprise. 2024 year-end generative AI report Deloitte United States. Available online at: https://www2.deloitte.com/us/en/pages/consulting/articles/state-of-generative-ai-in-enterprise.html?id=us:2el:3dc:4sogaidaii:eng:cons:012725
11
DietvorstB. J.BhartiS. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychol. Sci.31, 1302–1314. doi: 10.1177/0956797620948841,
12
DietvorstB. J.SimmonsJ. P.MasseyC. (2015). Algorithm aversion: people erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen.144, 114–126. doi: 10.1037/xge0000033,
13
DweckC. S. (1996). “Implicit theories as organizers of goals and behavior,” in Implicit Theories as Organizers of Goals and Behavior, eds. GollwitzerP. M.BarghJ. A. (London: Guilford Press), 69–90.
Evans, J. S. B. T. (2008). Dual-processing accounts of reasoning, judgment, and social cognition. Annu. Rev. Psychol. 59, 255–278. doi: 10.1146/annurev.psych.59.103006.093629
Falk, A., Fehr, E., and Fischbacher, U. (2008). Testing theories of fairness—intentions matter. Games Econ. Behav. 62, 287–303. doi: 10.1016/j.geb.2007.06.001
Faul, F., Erdfelder, E., Lang, A. G., and Buchner, A. (2007). G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 39, 175–191. doi: 10.3758/bf03193146
Fehr, E., Naef, M., and Schmidt, K. M. (2006). Inequality aversion, efficiency, and maximin preferences in simple distribution experiments: comment. Am. Econ. Rev. 96, 1912–1917. doi: 10.1257/aer.96.5.1912
Fiedler, K., Harris, C., and Schott, M. (2018). Unwarranted inferences from statistical mediation tests – an analysis of articles published in 2015. J. Exp. Soc. Psychol. 75, 95–102. doi: 10.1016/j.jesp.2017.11.008
Flathmann, C., Duan, W., Mcneese, N. J., Hauptman, A., and Zhang, R. (2024). Empirically understanding the potential impacts and process of social influence in human-AI teams. Proc. ACM Hum. Comput. Interact. 8, 1–32. doi: 10.1145/3637326
Furnham, A. (2003). Belief in a just world: research progress over the past decade. Personal. Individ. Differ. 34, 795–817. doi: 10.1016/s0191-8869(02)00072-7
Garvey, A. M., Kim, T., and Duhachek, A. (2023). Bad news? Send an AI. Good news? Send a human. J. Mark. 87, 10–25. doi: 10.1177/00222429211066972
Gigerenzer, G. (2018). The bias bias in behavioral economics. Rev. Behav. Econ. 5, 303–336. doi: 10.1561/105.00000092
Gill, T. (2020). Blame it on the self-driving car: how autonomous vehicles can alter consumer morality. J. Consum. Res. 47, 272–291. doi: 10.1093/jcr/ucaa018
Granulo, A., Caprioli, S., Fuchs, C., and Puntoni, S. (2024). Deployment of algorithms in management tasks reduces prosocial motivation. Comput. Hum. Behav. 152:108094. doi: 10.1016/j.chb.2023.108094
Gray, K., and Wegner, D. M. (2012). Feeling robots and human zombies: mind perception and the uncanny valley. Cognition 125, 125–130. doi: 10.1016/j.cognition.2012.06.007
Güth, W., Schmittberger, R., and Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 3, 367–388. doi: 10.1016/0167-2681(82)90011-7
Harsanyi, J. C. (1961). On the rationality postulates underlying the theory of cooperative games. J. Confl. Resolut. 5, 179–196. doi: 10.1177/002200276100500205
Hayes, A. F. (2013). Introduction to Mediation, Moderation, and Conditional Process Analysis. London: Guilford Press.
Hermann, I. (2021). Artificial intelligence in fiction: between narratives and metaphors. AI Soc. 38:299. doi: 10.1007/s00146-021-01299-6
Hitsuwari, J., Ueda, Y., Yun, W., and Nomura, M. (2023). Does human-AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Comput. Hum. Behav. 139:107502. doi: 10.1016/j.chb.2022.107502
Hong, J., and Chang, H. H. (2015). 'I' follow my heart and 'we' rely on reasons: the impact of self-construal on reliance on feelings versus reasons in decision making. J. Consum. Res. 41, 1392–1411. doi: 10.1086/680082
Huang, S., and Chen, F. (2019). "When robots come to our rescue: why professional service robots aren't inspiring and can demotivate consumers' pro-social behaviors," in NA - Advances in Consumer Research, eds. R. Bagchi, L. Block, and L. Lee (Duluth, MN: Association for Consumer Research), 93–98.
Huang, M., and Rust, R. T. (2024). The caring machine: feeling AI for customer care. J. Mark. 88, 1–23. doi: 10.1177/00222429231224748
Jonsson, O., and de Luca Tena, C. (2021). European tech insights (part II). IE Center for the Governance of Change. Available online at: https://docs.ie.edu/cgc/IE-CGC-European-Tech-Insights-2021-%28Part-II%29.pdf
Kahneman, D. (2003). Maps of bounded rationality: psychology for behavioral economics. Am. Econ. Rev. 93, 1449–1475. doi: 10.1257/000282803322655392
Khan, U., Dhar, R., and Wertenbroch, K. (2005). "A behavioral decision theory perspective on hedonic and utilitarian choice," in Inside Consumption: Frontiers of Research on Consumer Motives, Goals, & Desires, eds. S. Ratneshwar and D. G. Mick (Abingdon-on-Thames, UK: Routledge), 144–165.
Lange, B., Keeling, G., Manzini, A., and McCroskery, A. (2025). We need accountability in human-AI agent relationships. NPJ Artif. Intell. 1:38. doi: 10.1038/s44387-025-00041-7
Larney, A., Rotella, A., and Barclay, P. (2019). Stake size effects in ultimatum game and dictator game offers: a meta-analysis. Organ. Behav. Hum. Decis. Process. 151, 61–72. doi: 10.1016/j.obhdp.2019.01.002
Leung, E., Paolacci, G., and Puntoni, S. (2018). Man versus machine: resisting automation in identity-based consumer behavior. J. Mark. Res. 55, 818–831. doi: 10.1177/0022243718818423
Levitt, S. D., and List, J. A. (2008). Homo economicus evolves. Science 319, 909–910. doi: 10.1126/science.1153640
Logan, G. D. (1988). Toward an instance theory of automatization. Psychol. Rev. 95, 492–527. doi: 10.1037/0033-295X.95.4.492
Logg, J. M., Minson, J. A., and Moore, D. A. (2019). Algorithm appreciation: people prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 151, 90–103. doi: 10.1016/j.obhdp.2018.12.005
Longoni, C., Bonezzi, A., and Morewedge, C. K. (2019). Resistance to medical artificial intelligence. J. Consum. Res. 46, 629–650. doi: 10.1093/jcr/ucz013
Longoni, C., and Cian, L. (2020). Artificial intelligence in utilitarian vs. hedonic contexts: the 'word-of-machine' effect. J. Mark. 86, 91–108. doi: 10.1177/0022242920957347
Lopes, L. L., and Oden, G. C. (1991). "The rationality of intelligence," in Probability and Rationality (New York, NY: Brill), 199–223.
McFerran, B., and Mukhopadhyay, A. (2013). Lay theories of obesity predict actual body mass. Psychol. Sci. 24, 1428–1436. doi: 10.1177/0956797612473121
Morewedge, C. K. (2009). Negativity bias in attribution of external agency. J. Exp. Psychol. Gen. 138, 535–545. doi: 10.1037/a0016796
Mutzner, N., Yasseri, T., and Rauhut, H. (2026). Normative equivalence in human-AI cooperation: behaviour, not identity, drives cooperation in mixed-agent groups. arXiv preprint arXiv:2601.20487. doi: 10.48550/arXiv.2601.20487
Nass, C., Steuer, J., and Tauber, E. R. (1994). "Computers are social actors," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 72–78.
Neuberg, S., and Fiske, S. T. (1987). Motivational influences on impression formation: outcome dependency, accuracy-driven attention, and individuating processes. J. Pers. Soc. Psychol. 53, 431–444. doi: 10.1037/0022-3514.53.3.431
Newman, D. T., Fast, N. J., and Harman, D. (2020). When eliminating bias isn't fair: algorithmic reductionism and procedural justice in human resource decisions. Organ. Behav. Hum. Decis. Process. 160, 149–167. doi: 10.1016/j.obhdp.2020.03.008
Nisbett, R. E., Peng, K., Choi, I., and Norenzayan, A. (2001). Culture and systems of thought: holistic vs. analytic cognition. Psychol. Rev. 108, 291–310. doi: 10.1037/0033-295X.108.2.291
Nowak, M. A. (2000). Fairness versus reason in the ultimatum game. Science 289, 1773–1775. doi: 10.1126/science.289.5485.1773
Oosterbeek, H., Sloof, R., and van de Kuilen, G. (2004). Cultural differences in ultimatum game experiments: evidence from a meta-analysis. Exp. Econ. 7, 171–188. doi: 10.1023/b:exec.0000026978.14316.74
Petty, R. E., and Cacioppo, J. T. (1986). Attitudes and Persuasion: Classic and Contemporary Approaches. Dubuque, IA: Brown.
Pham, M. T. (1998). Representativeness, relevance, and the use of feelings in decision making. J. Consum. Res. 25, 144–159. doi: 10.1086/209532
Pham, M. T. (2007). Emotion and rationality: a critical review and interpretation of empirical evidence. Rev. Gen. Psychol. 11, 155–178. doi: 10.1037/1089-2680.11.2.155
Pham, M. T., Cohen, J. B., Pracejus, J. W., and Hughes, G. D. (2001). Affect monitoring and the primacy of feelings in judgment. J. Consum. Res. 28, 167–188. doi: 10.1086/322896
Plaks, J. E., Levy, S. R., and Dweck, C. S. (2009). Lay theories of personality: cornerstones of meaning in social cognition. Soc. Personal. Psychol. Compass 3, 1069–1081. doi: 10.1111/j.1751-9004.2009.00222.x
Prajapat, S., and Singh, A. K. (2024). Exploring the evolution and impact of artificial intelligence in science fiction cinema: an overview with financial and economic context. Econ. Aff. 69, 1097–1107. doi: 10.46852/0424-2513.3.2024.32
Ramezani, M., Takian, A., Bakhtiari, A., Rabiee, H. R., Ghazanfari, S., and Mostafavi, H. (2023). The application of artificial intelligence in health policy: a scoping review. BMC Health Serv. Res. 23:1416. doi: 10.1186/s12913-023-10462-2
Reich, T., Kaju, A., and Maglio, S. (2022). How to overcome algorithm aversion: learning from mistakes. J. Consum. Psychol. 33, 285–302. doi: 10.1002/jcpy.1313
Reinecke, M. G., Kappes, A., Mann, S. P., Savulescu, J., and Earp, B. D. (2025). The need for an empirical research program regarding human-AI relational norms. AI Ethics 5, 71–80. doi: 10.1007/s43681-024-00631-2
Rodriguez-Sickert, C. (2009). "Homo economicus," in Handbook of Economics and Ethics, eds. J. Peil and I. van Staveren (London: Edward Elgar Publishing), 223–229.
Russel, S., and Norvig, P. (2020). Artificial Intelligence: A Modern Approach, 4th Edn. New York, NY: Prentice Hall.
Ryberg, J. (2023). Sentencing disparity and artificial intelligence. J. Value Inq. 57:9835. doi: 10.1007/s10790-021-09835-9
Ryberg, J. (2024). Criminal justice and artificial intelligence: how should we assess the performance of sentencing algorithms? Philos. Technol. 37:9. doi: 10.1007/s13347-024-00694-3
Salatino, A., Prével, A., Caspar, E., and Lo Bue, S. (2025). Influence of AI behavior on human moral decisions, agency, and responsibility. Sci. Rep. 15:12329. doi: 10.1038/s41598-025-95587-6
Sanfey, A. G. (2007). Social decision-making: insights from game theory and neuroscience. Science 318, 598–602. doi: 10.1126/science.1142996
Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., and Cohen, J. D. (2003). The neural basis of economic decision-making in the ultimatum game. Science 300, 1755–1758. doi: 10.1126/science.1082976
Simon, H. A. (1955). A behavioral model of rational choice. Q. J. Econ. 69, 99–118. doi: 10.2307/1884852
Simonsohn, U., Nelson, L. D., and Simmons, J. P. (2014). P-curve: a key to the file-drawer. J. Exp. Psychol. Gen. 143, 534–547. doi: 10.1037/a0033242
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychol. Bull. 119, 3–22. doi: 10.1037/0033-2909.119.1.3
Smith, E. R., and DeCoster, J. (2000). Dual-process models in social and cognitive psychology: conceptual integration and links to underlying memory systems. Personal. Soc. Psychol. Rev. 4, 108–131. doi: 10.1207/S15327957PSPR0402_01
Stanovich, K. E., West, R. F., and Toplak, M. E. (2011). "Intelligence and rationality," in The Cambridge Handbook of Intelligence, eds. R. J. Sternberg and S. B. Kaufman (Cambridge: Cambridge University Press), 784–826.
Stanovich, K. E., West, R. F., and Toplak, M. E. (2012). Judgment and decision making in adolescence: separating intelligence from rationality. Am. Psychol. Assoc. eBooks 8, 337–378. doi: 10.1037/13493-012
Suleiman, R. (1996). Expectations and fairness in a modified ultimatum game. J. Econ. Psychol. 17, 531–554. doi: 10.1016/s0167-4870(96)00029-3
Thaler, R. H. (1988). Anomalies: the ultimatum game. J. Econ. Perspect. 2, 195–206. doi: 10.1257/jep.2.4.195
Treiman, L. S., Ho, C.-J., and Kool, W. (2025). "Do people think fast or slow when training AI?" in Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, 2728–2750.
Tully, S., Longoni, C., and Appel, G. (2025). Lower artificial intelligence literacy predicts greater AI receptivity. J. Mark. 89, 1–20. doi: 10.1177/00222429251314491
UNESCO (2024). Generative AI: UNESCO Study Reveals Alarming Evidence of Regressive Gender Stereotypes. Paris: UNESCO.
Upadhyaya, N., and Galizzi, M. M. (2023). In bot we trust? Personality traits and reciprocity in human-bot trust games. Front. Behav. Econ. 2:1164259. doi: 10.3389/frbhe.2023.1164259
VanBergen, N., Lurie, N. H., and Chen, Z. (2022). More rational or more emotional than others? Lay beliefs about decision-making strategies. J. Consum. Psychol. 32, 274–292. doi: 10.1002/jcpy.1244
Vijayakumar, S., Liu, Q., and Yuwei, J. (2023). "How recommender's age affects utilitarian vs. hedonic perceptions of a recommended product," in Advances in Consumer Research, ACR 2023, eds. L. N. Chaplin, P. Raghubir, and K. Wilcox (London: Association for Consumer Research), 677–678.
von Walter, B., Kremmel, D., and Jäger, B. (2021). The impact of lay beliefs about AI on adoption of algorithmic advice. Mark. Lett. 33, 143–155. doi: 10.1007/s11002-021-09589-1
Wang, L., and Murnighan, J. K. (2017). How much does honesty cost? Small bonuses can motivate ethical behavior. Manag. Sci. 63, 2903–2914. doi: 10.1287/mnsc.2016.2480
Yalcin, G., Kim, S., Puntoni, S., and van Osselaer, S. M. J. (2022). Thumbs up or down: consumer reactions to decisions by algorithms versus humans. J. Mark. Res. 59, 696–717. doi: 10.1177/00222437211070016
Yu, S., Xiong, J., and Shen, H. (2024). The rise of chatbots: the effect of using chatbot agents on consumers' responses to request rejection. J. Consum. Psychol. 34, 35–48. doi: 10.1002/jcpy.1330
Zhang, G., Chong, L., Kotovsky, K., and Cagan, J. (2023). Trust in an AI versus a human teammate: the effects of teammate identity and performance on human-AI cooperation. Comput. Hum. Behav. 139:107536. doi: 10.1016/j.chb.2022.107536
Zhou, J., Porat, T., and Van Zalk, N. (2024). Humans mindlessly treat AI virtual agents as social beings, but this tendency diminishes among the young: evidence from a cyberball experiment. Hum. Behav. Emerg. Technol. 2024:8864909. doi: 10.1155/2024/8864909
Keywords
AI, artificial intelligence, cognitive style, decision-making, emotion, game theory, reason
Citation
Vijayakumar S, Yang WY and DeFranza D (2026) Lay belief about AI and its decision-making. Front. Comput. Sci. 8:1768435. doi: 10.3389/fcomp.2026.1768435
Received
15 December 2025
Revised
23 February 2026
Accepted
24 February 2026
Published
17 March 2026
Volume
8 - 2026
Edited by
Tailia Malloy, Universite du Luxembourg - Campus Kirchberg, Luxembourg
Reviewed by
Eun Ho Kim, Dong-A University - Bumin Campus, Republic of Korea
Yulia Litvinova, Max Planck Institute for Intelligent Systems, Germany
Copyright
© 2026 Vijayakumar, Yang and DeFranza.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Suhas Vijayakumar, suhas.vijayakumar@ucd.ie
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.