CONCEPTUAL ANALYSIS article

Front. Hum. Dyn., 09 July 2025

Sec. Digital Impacts

Volume 7 - 2025 | https://doi.org/10.3389/fhumd.2025.1554731

This article is part of the Research Topic: Human-Artificial Interaction in the Age of Generative AIs.

The coopetition of human intelligence and artificial intelligence through the prism of irrationality

  • 1Université du Québec à Rimouski, Rimouski, QC, Canada
  • 2Faculty of Law, Economics and Social Sciences, University Hassan II of Casablanca, Casablanca, Morocco

Artificial intelligence (AI) is sometimes seen as a threat to humans, posing both ethical challenges and job losses, and sometimes as an opportunity. The aim of this purely conceptual article is to understand the AI-Human Intelligence (HI) relationship from a coopetitive, cooperative and competitive perspective, through the rationality-irrationality dialogue between the two forms of intelligence. Humans can never be as rational as AI. Consequently, if humanity hides its irrational component, it can only compete with AI, and in that game humans are sure to lose. It is this irrationality (in permanent dialogue with rationality) that should be valued to enable human-machine coopetition.

1 Introduction

This article was entirely written using human intelligence (HI), at a time when debate over the supremacy of machine intelligence is arousing intense interest. Artificial intelligence (AI) is increasingly perceived as a major threat to HI. Several sectors have undergone an unprecedented metamorphosis (services, industry, agriculture, education, finance, medicine). AI can be described as a third wave of automation, one that affects the cognitive dimensions long reserved for HI; the first wave dealt with manual and hazardous tasks, the second with repetitive and monotonous tasks (Davenport and Kirby, 2016).

Automation is not a new phenomenon in human-machine relationships. Its combination with the superpower of the machine, whose ever-larger computing capacity enables ever-deeper learning, raises the question of the coexistence of AI and HI. Reflection now centers on a possible substitution of the human by the artificial. The theory of limited or bounded rationality, dear to Herbert Simon, makes it possible to reject this hypothesis. AI is viewed as a consequence of HI, one with the potential to evolve. We thus encounter a recursive causality, as described by Morin (2001), in which HI produces the AI that in turn produces it.

The literature on AI and its relationship with HI can be articulated around the theses of substitution and singularity. Some authors support the idea of replacing HI with machine intelligence, implying a possible disappearance of the usefulness of the former in favor of the latter. In management sciences, Baumard (2019) examined the possibility of AI theorizing organizations. AI has demonstrated a capacity for sophisticated automation; the objective of such automation, however, is to extract a human heuristic capacity, influenced by irrationality, and transform it into a supposedly rational machine heuristic. The disappearance of one form of intelligence and the supremacy of the other is utopian. The two are complementary and must coexist dialogically, even in the presence of some apparent competition. Singularity, for its part, refers to the superpower of a machine that can simulate HI; the concept of the learning machine is proof of this.

In this article, we side neither with the thesis of substitution of the human by the artificial, nor with that of the singularity of AI, which often implies its supremacy and the extinction of natural human intelligence. We consider that the coexistence of the two intelligences serves humanity well. This presupposes collaboration between the two intelligences, without excluding competition, in order to perform a greater range of tasks and to demonstrate a situational supremacy of one form of intelligence over the other.

The objective of this purely conceptual article is to understand the AI-HI relationship in a coopetitive, cooperative and competitive perspective, through the rationality-irrationality dialogic of the two forms of intelligence. Coopetition is a concept that results from the contraction of cooperation and competition. It was first introduced into the management literature by Ray Noorda, the founder of Novell. Its relevance lies in its ability to integrate both competitive and cooperative interactions.

2 Basic concepts: human intelligence and artificial intelligence, what are we talking about?

In recent years, there has been a surge of conceptual thinking around AI. On the one hand, AI is defined as the ability of a system to correctly interpret external data, learn from that data, and use that new knowledge to achieve specific goals or tasks while flexibly adapting (Nah et al., 2023). AI can also be characterized as the ability of an artificial system composed of algorithms and software programs to achieve predetermined goals and tasks (Chowdhury et al., 2023). On the other hand, the OECD defines AI as a general purpose technology (OECD, 2019) because of its ability to generate predictions that can be used for decision-making in activities as varied as education, radiology and translation (Agrawal et al., 2019). AIs are calculators: they work mechanically, step by step. These procedures are called algorithms (Müller, 2024).
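The "step by step" character of an algorithm can be made concrete with a classic example (our illustration, not drawn from the cited sources): Euclid's procedure for the greatest common divisor, a purely mechanical rule applied repeatedly until a stopping condition is met.

```python
# "Step by step" in the literal sense: Euclid's algorithm for the
# greatest common divisor, the kind of mechanical procedure the
# passage above calls an algorithm.
def gcd(a: int, b: int) -> int:
    while b != 0:          # repeat one fixed rule until the remainder is zero
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

Every run traces the same deterministic sequence of states (48, 18) → (18, 12) → (12, 6) → (6, 0); nothing in the procedure depends on judgment or context.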

The term "artificial intelligence" was first used in 1955 (Nyholm, 2024). The goal is not to reproduce HI, but rather to simulate it (Nyholm, 2024). After a relatively slow evolution, AI is now experiencing dramatic change. During its first two decades, AI research centered on systems such as the General Problem Solver (Nah et al., 2023). Over the past decade, the use of AI has accelerated with Big Data, cloud computing, growing data-storage capacity and machine learning, with the introduction of AlphaGo in 2015 and of ChatGPT in 2022 (Nah et al., 2023). Today, AI takes many forms: robotic process automation (for example, cobots in warehouses), computer vision, speech recognition, and machine learning and deep learning algorithms (Budhwar et al., 2023). Generative pre-trained transformers (GPTs) are Large Language Models (LLMs) that apply deep learning to considerable amounts of data (Nah et al., 2023). Generative Adversarial Networks (GANs) are newer generative models consisting of two competing neural networks: a generator and a discriminator. The generator produces data that is as realistic as possible, while the discriminator tries to distinguish synthetic data from real data (Nah et al., 2023). Another important aspect is the ability of AI to self-improve and to expand increasingly into cognition, understanding, learning and even decision-making (OECD, 2019).
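To make the generator-discriminator competition concrete, here is a deliberately minimal numerical sketch (our illustration, not from the cited sources): a one-parameter "generator" shifts its samples toward real data drawn from N(4, 1), while a logistic "discriminator" tries to tell the two apart. Real GANs use deep networks and automatic differentiation; the hand-derived gradients below only mimic the adversarial structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1) that the generator must learn to imitate.
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = z + b (only the offset b is learned, to keep gradients simple).
# Discriminator: D(x) = sigmoid(w*x + c), a one-feature logistic classifier.
b, w, c = 0.0, 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    # Discriminator step: descend the loss -log D(real) - log(1 - D(fake)).
    x_real = sample_real(batch)
    x_fake = rng.normal(0.0, 1.0, batch) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1.0 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1.0 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: descend the loss -log D(fake), i.e. try to fool D.
    x_fake = rng.normal(0.0, 1.0, batch) + b
    d_fake = sigmoid(w * x_fake + c)
    b -= lr * np.mean(-(1.0 - d_fake) * w)

# b should have drifted from 0 toward the real mean of 4.
print(f"learned generator offset b = {b:.2f}")
```

The two updates pull in opposite directions, which is precisely the competition the text describes; in this toy setting the generator's offset moves toward the real data's mean as the discriminator loses its ability to separate the two distributions.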

AI brings its share of challenges, whether ethical, technological, regulatory or economic, particularly in relation to the job market (Nah et al., 2023). From an economic perspective, AI creates both challenges and opportunities for the labor market. Its influence is twofold, being both deep (depth) and vast (scale), and it will lead to job losses across a wide range of sectors. On the other hand, AI's ability to generate new innovations, including new ideas and technologies that help solve complex problems, can lead to the birth of new industries and create countless new jobs (OECD, 2019).

The first idea that comes to mind about the impact of the artificial on the human in organizations is the fear of massive job destruction. The machine, whose (programmed and limited) intelligence is artificial, would replace the naturally intelligent human. Yet while human rationality is limited, human intelligence is not. The fear of AI is explained by its rapid spread, which makes it possible to automate almost all activities traditionally carried out by individuals. At the same time, AI must be considered different from the technologies established in recent decades, whose main mission was to automate repetitive tasks. With AI, the machine becomes intelligent, learning and interactive, featuring exceptional data storage, visual recognition, and decision-making capabilities approaching optimality. Some even envisage the feasibility of machines replacing humans in producing research; the idea is to know what AI could do in theory (Baumard, 2019).

While discourses about robotic replacement and the end of work have survived to this day, more and more people are speaking out against the idea that technological innovation necessarily leads to job loss (Gordon and Gunkel, 2025). This is not the case, for example, for the digital workforce, the human labor in the digital sector that remains particularly necessary on the last mile (Casilli, 2021). The links between AI and employment are thus paradoxical: AI both creates and destroys jobs (Budhwar et al., 2023; Chowdhury et al., 2023). AI is transforming human work across industry, services, agriculture and mining. Highly skilled professions such as radiologist, laboratory technician, engineer and lawyer are more exposed to the effects of AI than others. This in no way means that these professions are threatened with disappearance; they will undergo profound transformations, and their practitioners will play different roles.

Some activities are more exposed to AI than others; this does not mean that the artificial will replace the human. Others are “less exposed,” which does not mean they will escape the wave of automation. For Webb (2020), highly skilled jobs, including laboratory and clinic technicians, optometrists and chemical engineers, are the most impacted by AI. Similarly, production activities involving quality control represent, for Webb (2020), the smaller share of low-skilled jobs likely to be affected by this technological wave. Some activities are less exposed because of the supremacy of HI (researchers, for example), because they require interpersonal skills (teachers and managers), or because they are manual, such as catering or massage and therapy.

Humans are traditionally perceived as smarter than other animals; today, however, some believe that AI can make them less intelligent (Nyholm, 2024). Müller (2024) defines intelligence as the ability to pursue goals flexibly, where flexibility is explained in terms of different environments. He distinguishes this definition from a more traditional one in which intelligence measures an agent's ability to achieve objectives in a wide range of environments. Griffiths (2020) explains that the uniqueness of HI rests on three limitations: limited time, limited computation, and limited communication. AI does not have these biological limitations, but according to Griffiths (2020), this does not mean that AI is not intelligent. Nevertheless, Acharjee and Gogoi (2024) conclude that HI, though limited, remains superior to AI.

Müller (2024) explains that artificial machine intelligence corresponds to the instrumental and normative vision of intelligence found in the traditional definition. If intelligence is defined as a capacity for complex behavior, then technology could potentially become intelligent. However, if intelligence is understood as tied to a conscious or subjective state, then it seems implausible that technology could match natural intelligence (Nyholm, 2024).

3 Literature analysis and synthesis: the dialogical perspective of the human and the artificial

The coopetition of the AI-HI relationship is grounded in at least two principles of complex thinking: the dialogical and the recursive. For Morin (1977), the dialogical principle is a symbiotic unity of two logics that simultaneously feed each other, compete, parasitize each other, oppose and fight each other to the death. The principle of recursive causality considers every product also a producer, and every cause also a consequence. Like the chicken and the egg, HI produces AI, which in turn produces HI. The AI-HI coopetition thus mobilizes both the dialogical and the recursive principles of complex thinking.

3.1 Toward a dialogical and recursive approach AI-HI

As Schmitt (2021) does in his debate on order and disorder, it seems possible to consider the two intelligences as the obverse and reverse of the same coin. The idea of separating the two intelligences, whether by substitution of one for the other or by supremacy, is therefore excluded. Artificial-human coopetition is a dialogic, in the sense of Morin (1982), because the two forms of intelligence coexist and unite without duality being lost in unity. Indeed, despite the dazzling rise of the artificial, the adoption of AI by different industries is still in its embryonic phase. The industries most advanced in AI adoption are ICT, automotive and assembly, telecoms, transport and logistics, financial services, packaged consumer goods, health services, etc. (Besson, 2018).

In the logic of linear cause-and-effect causality, one might expect the eventual disappearance of one of the forms of intelligence. Through the prism of recursive causality, by contrast, HI produces AI, and AI is today a cause of higher levels of HI. With recursion, “the effects or products are at the same time causes and producers in the process itself, and the final states are necessary for the generation of the initial states. Thus, the recursive process is a process that produces/reproduces itself, provided it is fed by an external source, reserve or flow” (Morin, 1986, p. 101, translation).

The self-improvement enabled by AI makes this technology unique compared with other automation technologies. For Nordhaus (2015), the theory of the singularity of AI often emanates from computer science. Yet AI will always depend on HI and on the decisions that result from it (Julia, 2019). Some economists (Brynjolfsson and McAfee, 2014) propose a soft version of singularity theory. They tested a variety of hypotheses implying a technology-driven acceleration of growth, but could not support their conclusions: the capital/output ratio is not rising rapidly, the decline in the cost of capital has not accelerated, and productivity growth is not accelerating. In short, singularity could only be contemplated on a horizon of a century. This conclusion echoes another that predicts the automation of all jobs in about 120 years (Grace et al., 2017).

3.2 Beyond substitution and singularity, coopetition!

AI does not only lead to the automation of human tasks, but also to the complementing of HI (Nyholm, 2024). In this article, we endorse neither the thesis of the substitution of the human by the artificial, nor that of the singularity of AI, which often translates into its supremacy. We consider the collaboration of the two intelligences, without excluding forms of competition, to perform a wider range of tasks and to demonstrate a situational supremacy of one form of intelligence over the other. For example, AI can help doctors analyze large amounts of medical data, such as medical imaging or laboratory results. However, the uses of AI are not without problems (King, 2023; Nah et al., 2023).

There are many possible avenues for collaboration between humans and generative AI. For example, teachers can use AI in their teaching, while remaining mindful of the resources AI generates (Nah et al., 2023). Hitsuwari et al. (2023) have shown that HI-AI collaboration allows for greater creativity in the production of haiku, poems of Japanese origin. However, LLMs have proven to perform poorly on tasks that require cognitive manipulation of world knowledge. For this reason, they have been described as stochastic parrots: these architectures can only learn sophisticated, long-range sequential probabilities. With such capabilities, the essential frontal functions generally considered the quintessence of the human being cannot even be approached (Farina et al., 2024).

According to Floridi (2023), artificial agents achieve goals but do not possess ways of thinking like HI, the intelligence humans use to achieve their own goals. AI technologies mimic, simulate or act, but have nothing to do with HI (Nyholm, 2024). In addition, Nyholm (2024) concludes that humans can take credit for what they accomplish with their expanded minds. Technologies do not literally expand human minds, but they allow humans to act as if their cognitive abilities had been improved, and could thus be called artificial cognitive improvements. It is less plausible, however, to regard them as truly constituting a form of human enhancement than as constituting a form of AI. ChatGPT, for example, operates too independently for this type of AI to qualify as an extension of the user's brain (Nyholm, 2024).

The research of Brynjolfsson et al. (2018) explores these artificial cognitive improvements and argues that many activities are performed by humans with a greater or lesser use of machine intelligence. Such is the case of an economist who must write a report, interpret it, make recommendations and translate them into an action plan, all on the basis of predictive models that only AI makes possible, allowing the economist to carry out advanced computations on big data. The two intelligences coexist, strengthen each other, feed each other, complement one another and compete. Each improves the productivity of the other. The AI-HI relationship is thus a dialogical and recursive logic of coopetition that combines collaboration and competition.

Rationality is therefore an important element of differentiation between humans and AI. It can be an element to consider in a coopetition between the two.

4 Discussion: toward an AI-HI symbiosis through irrationality

This discussion proceeds in three stages: first, a proposal is developed that situates the AI-HI relationship within the concepts of rationality and irrationality; thereafter, the contributions and the limits are presented.

4.1 Proposal development: HI-AI between rationality and irrationality

According to O'Doherty (2020), management science dogmatizes the pursuit of rationality and reason. Most analyses assume that people act rationally (Holley, 2018), and reason holds a prominent place in Western societies (Heitz, 2014). These values of rationality are supposed to ground actions and thoughts on neutral bases; rationality is an ideal norm (Jarzabkowski and Kaplan, 2015). But for Jarzabkowski and Kaplan (2015), it is problematic to equate the incorrect with the irrational.

4.1.1 Rationality and its limitations

According to classical rationalism, everything can be explained rationally (Heitz, 2014). Cartesianism would have it that our actions and thoughts are inspired by rationality (Heitz, 2014). “By reason, we mean a set of rules of action that regulate the behavior of actors, according to moral values, while allowing them to clearly see the goals to be achieved and the necessary means of their implementation” (El-Gharbi and Khefacha, 2009, p. 76). In economics, utility functions must satisfy mathematical properties such as continuity, monotonicity or quasi-concavity. These properties imply rationality in the process, even if this may not be the case in the ends pursued (Gilboa et al., 2012). In AI, it is common to perceive AI agents as rational agents (Müller, 2024) exercising a substantive rationality.

A rational agent perceives its environment, identifies the options available to it and chooses the best decision. This is a normative view in which agents should act on the information they have; it is not a descriptive theory of how agents actually act (Müller, 2024). Rationality has a strong sense and a weak one. In the weak sense, rationality is the state or quality of being in accord with reason. This type of rationality, in line with Aristotelian thought, holds that the human is the only animal that acts rationally; its opposite is not irrationality but arationality, other animals being arational while only humans are rational (Stanovich, 2011). In its strong sense, rational thinking is a normative notion whose opposite is irrationality, not arationality, and only humans (not other animals) can be irrational. In this case, rationality (and irrationality) is measured by the distance of behavior or thought from an optimum defined by a normative model (Stanovich, 2011).
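The normative picture of a rational agent described above can be sketched in a few lines (a toy illustration of expected-utility choice, not drawn from Müller): the agent holds beliefs, that is, probabilities over states of its environment, consults a utility table, and picks the action with the highest expected utility.

```python
# A minimal "rational agent" in the normative sense: given beliefs
# (probabilities over states) and a utility table, choose the action
# that maximizes expected utility.
def expected_utility(action, beliefs, utility):
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

def rational_choice(actions, beliefs, utility):
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# The agent's perception of its environment, and its preferences.
beliefs = {"rain": 0.3, "sun": 0.7}
utility = {("umbrella", "rain"): 5, ("umbrella", "sun"): 2,
           ("no_umbrella", "rain"): -10, ("no_umbrella", "sun"): 6}

print(rational_choice(["umbrella", "no_umbrella"], beliefs, utility))  # → umbrella
```

Here EU(umbrella) = 0.3·5 + 0.7·2 = 2.9 beats EU(no_umbrella) = 0.3·(-10) + 0.7·6 = 1.2, so the normative model prescribes the umbrella; nothing in the sketch describes how real agents actually decide.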

However, the biologists Konaka and Naoki (2023) explain that humans, like all other animals, are not always rational: they do not just rationally exploit rewards, they also explore their environment out of curiosity. Animals and humans perceive the outside world through their sensory systems and make decisions accordingly. Often, they cannot make optimal decisions because of environmental uncertainty, the limited capacity of the brain, and the time constraints associated with decision-making. In addition, they perform irrational actions (such as gambling even when the expected gain is low) that are explained by curiosity, although the mechanism of this curiosity-driven irrational behavior remains largely unknown. Their model describes irrational behaviors according to the level of curiosity.
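Konaka and Naoki's model is not reproduced here, but the logic of curiosity-driven "irrationality" they describe can be caricatured in a few lines (our toy illustration): an agent scoring options purely by expected value never picks a low-expectation gamble, whereas adding a curiosity bonus proportional to an option's uncertainty can flip the choice.

```python
# Toy illustration (not the authors' model): a safe option vs. a gamble
# with lower expected gain but higher uncertainty.
options = {
    "safe":   {"mean": 1.0, "variance": 0.0},
    "gamble": {"mean": 0.5, "variance": 4.0},
}

def choose(options, curiosity=0.0):
    # Score = expected value + curiosity-weighted uncertainty bonus.
    score = {name: o["mean"] + curiosity * o["variance"] for name, o in options.items()}
    return max(score, key=score.get)

print(choose(options, curiosity=0.0))  # → safe   (value-only, "rational" choice)
print(choose(options, curiosity=0.5))  # → gamble (curiosity-weighted choice)
```

With no curiosity the agent maximizes expected value and avoids the gamble; once uncertainty itself is rewarded, the "irrational" gamble becomes the preferred option, mirroring the exploratory behavior the passage describes.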

Descartes (1724/1974), while demonstrating the primacy of rationalism, developed the idea of doubt. This methodical doubt consists in rejecting any element of knowledge that is not certain. For the moment, no AI can implement such methodical doubt, which is why some generated content is not credible. Moreover, ChatGPT easily misleads humans into believing that a human produced its responses (Nah et al., 2023), and humans find it difficult to doubt the content generated by LLMs.

Simon (1986) distinguishes the conceptions of rationality held by economists and psychologists. Economists treat human behavior as rational, while psychologists are interested in both rational and irrational aspects. These differences rest on a fundamental distinction: in economics, rationality is considered in terms of the choices it produces; in the other social sciences, in terms of the processes it employs. Economic rationality is substantive, while psychological rationality is procedural (Simon, 1986). Simon (1979) develops the idea that rationality is procedural, limited and intuitive. He thus abandons the classical postulates of optimization (maximizing) in favor of the idea of satisfaction (satisficing) (Martignon et al., 2022). Simon (1979) notes, moreover, that what Becker (1962) defines as irrationality, namely any deviation from utility maximization, corresponds to what he himself calls limited rationality.
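The contrast between maximizing and satisficing can be made concrete with a small sketch (our illustration of Simon's distinction, not code from the cited works): the maximizer inspects every option and returns the best, while the satisficer stops at the first option that clears an aspiration level.

```python
# Maximizing examines every option and picks the best; satisficing
# (Simon) stops at the first option that meets an aspiration level.
def maximize(options, value):
    return max(options, key=value)

def satisfice(options, value, aspiration):
    for o in options:           # options arrive in search order
        if value(o) >= aspiration:
            return o            # "good enough": stop searching here
    return None                 # no option met the aspiration level

offers = [3, 7, 5, 9, 4]        # e.g. alternatives scored by attractiveness
print(maximize(offers, value=lambda x: x))                  # → 9
print(satisfice(offers, value=lambda x: x, aspiration=6))   # → 7
```

The satisficer accepts 7 and never sees the better option 9: it trades optimality for a drastically cheaper search, which is exactly the procedural, limited rationality the paragraph attributes to Simon.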

Stetzka and Winter (2023) focus on a continuum running from rationality through limited rationality to irrationality. They explain that while the distinction between rationality and limited rationality is quite simple and has a long tradition in economics, irrationality is difficult to define. Defining it simply as the absence of total rationality does not draw a clear line between irrationality and limited rationality. They therefore prefer to consider that people who can control their behavior are rational or of limited rationality, while those who cannot are irrational.

4.1.2 Irrationality

El-Gharbi and Khefacha (2009) present philosophical, economic, sociological and managerial perspectives on irrationality. Some accept this irrationality as natural; others see it as a bias to avoid. For Gilboa et al. (2012), irrational beliefs are those contradicted by evidence; similarly, irrational choices are those that conflict with reason.

4.1.3 Bias to avoid

Irrationality, seen as antagonistic to a rationality that represents an ideal, is a notion negatively marked by many philosophers (El-Gharbi and Khefacha, 2009). Descartes defends clear and distinct ideas and explains that imagination is the basis of errors and falsehoods (Heitz, 2014). According to Graf et al. (2012), executives sometimes sacrifice their profits for the sole purpose of improving their relative competitive position, a behavior known as competitive irrationality. These authors sought to limit such behavior through bias-limiting measures: creating accountability, considering the opposite, making the bias of competitive irrationality salient to the decision-maker, reducing time pressure and relying on external advice. They tested their hypotheses on a sample of 934 managers using online experiments. Their results indicate that efforts to make managers accountable for their actions can actually harm the quality of decisions, contrary to their theory.

Hannon et al. (2024) explain that non-use of, or resistance to, algorithmic decision-making systems is usually regarded as an obstacle to optimum productivity and efficiency, whereas they show that this resistance can allow a better alignment of these systems with the needs and values of society.

4.1.4 Human irrationality

A good decision is not automatically a rational decision, and wanting to follow only rational decisions would be unrealistic according to Geoffroy (2012). Normative research has led to a growing consensus among researchers on the types of decisions that should be described as rational. At the same time, empirical research has found ample evidence of decision-making processes that appear irrational by normative standards. Moreover, apparent irrationalities are not limited to insignificant decisions: people behave the same way when making important decisions on strategic issues. It can even be said that apparent irrationalities matter most in major decisions (Brunsson, 1982). El-Gharbi and Khefacha (2009), drawing on the dictionary Le Petit Robert, explain that irrationality concerns what does not conform to reason, a notion associated with abnormality.

However, Starbuck (2004) questions the primacy of rationality in a realistic world view. Decision-making processes combine rational and irrational parts (Altanlar et al., 2023). According to Heitz (2014), irrational actions can be defined as intentionally acting in contradiction with one's chosen norms and values; among the elements that explain such behaviors are passion, desire and pleasure. Irrational behaviors are not marginal but pervasive in our organizations (El-Gharbi and Khefacha, 2009). Humans are neither fully in control (rational) nor fully out of control (irrational) (O'Doherty, 2020). Irrationality is a key component of human behavior (Holley, 2018).

Several notions are often opposed to rationality, among them mysticism, the unconscious, acquiescence and empathy. For some authors, for example, intuition involves magical thinking or elements that appear in inexplicable or mysterious ways. Although there is no consensus on the notion, it is often contrasted with reason and logic (Shirley and Langan-Fox, 1996). Abbas (2020) distinguishes behavioral economists, who rely on limited rationality, from Islamic economists, for whom the ultimate purpose of humans is God. This mystical conception is also found among scholars claiming other religious allegiances, for example Melé and Cantón (2014).

If irrationality is defined as a thought, emotion or behavior that leads to adverse consequences for the individual or significantly interferes with the survival and happiness of the organism, we find that hundreds of major irrationalities exist in all societies and in virtually all their members. These irrationalities persist despite people's conscious determination to change; many of them run counter to almost everything the individuals who hold them have been taught; they persist in highly intelligent, educated and relatively well-adjusted people; and when people abandon them, they usually replace them with other irrationalities (Ellis, 1975). According to Kets de Vries (1994), wise leaders realize that unconscious and irrational processes affect their behavior. They recognize the limits of rationality and become more aware of their own character traits. Leaders who ignore their irrational side are like captains who blindly navigate their ship through a field of icebergs: the greatest dangers lie hidden beneath the surface. In this way, Kets de Vries (1994) associates the unconscious with irrationality. However, taking an interest in the unconscious and the imaginary does not mean giving up rationality (De Swarte, 2013); it all depends on how rationality is defined.

Almost paradoxically, magic (irrational) and technologies (which can be associated with rationality) respond to the same human need to reduce the world's contingency (Larsson and Viktorelius, 2024). In another area, the psychologists Walco and Risen (2017) have defended the concept of acquiescence: people can explicitly recognize that their intuitive judgment is wrong and nevertheless maintain it. They have shown that people can hold a false belief about the world, recognize that the belief is irrational, and still follow their intuition, even at a cost. Both rationality and intuition are important elements of effective decision-making (Thanos, 2023).

The complex nature of the human being deserves to be addressed through paradoxes, notably by going beyond rationality and considering irrationality (Bednarek et al., 2021). “It would be insanely and deliriously irrational to hide the insane and delirious irrational component of the human being,” explains Morin (2001, p. 132, translation). If, for example, we imagine forcing human behaviors into a rational grid, rationality could turn into its opposite (irrationality), because it would degenerate into rationalization (Morin, 2001). Yet in today's organizations the irrational part is hidden, so as to maintain the belief that the rational alone can explain everything. Indeed, unreason is at the heart of rationalization. Moreover, what is called irrational is itself ambiguous, leaving a residue that Morin (1996) calls the irrationalizable, which cannot be translated into the logical categories of our understanding.

4.1.5 AI-HI: between rationality and irrationality

AI systematically uses algorithms, and therefore rational processes (Müller, 2024). However, these processes may draw on invalid or even irrational information (Nah et al., 2023), so AI can produce content that may be considered irrational. Thus, the process of AI cognition is rational, but its results can be irrational. Moreover, the instability of results generated by deep learning is a well-known current limitation (Colbrook et al., 2022).

This differs from humans, who are both rational and irrational (Morin, 2001) in their processes (and results). According to Acharjee and Gogoi (2024), a distinction between AI and HI arises from the human ability to perform non-deductive reasoning: only HI is capable of abductive reasoning. Abduction involves a non-linear process, unlike induction and deduction (Bellucci and Pietarinen, 2020). Abduction is the only logical operation that introduces a new idea, and it underlies induction and deduction (Peirce, 1893-1913/1998). It enables us to question how an unexpected phenomenon comes to exist (Chew, 2020). When Peirce describes abduction as based on instinct, this does not contradict its logical structure of argumentation, because it can still yield plausible theories (Chew, 2020). Abduction combines emotions and intuitions with logic, taking HI beyond purely rational procedures (Fortes, 2023). This irrational component differentiates a human from any AI; it participates in the beauty of the human, in its unpredictability. AI is rational in essence, even though it may appear irrational (Heiland, 2023); its process is deductive (Acharjee and Gogoi, 2024). These differences have effects at several levels, of which we will develop three: creativity, accountability and professional judgment.

4.1.5.1 Creativity

Creativity has traditionally been considered an exclusive ability of HI. However, the rapid development of AI has given rise to generative chatbots capable of producing high-quality works of art (Koivisto and Grassini, 2023). Currently, the impact of AI is felt unevenly: some "authors," such as artists and composers working at the bottom of the scale of cultural production, are more likely to face competition from AI. As for performers, whose creativity is produced on and by their "bodies," it is physically impossible to replace their embodied creativity with the creative process of AI.

Nevertheless, it is worth examining current attempts to create digital performers such as actors and singers. As these digital characters are not created by self-taught AIs, they require substantial and costly creative human contribution (Lee, 2022). To address this competition between AI and humans in terms of creativity, Vinchon et al. (2023) propose a manifesto for a harmonious collaboration between AI and HI. Jia et al. (2024) explain that humans can be more creative with AI, but only those with higher professional skills.

In any case, Wu et al. (2021) emphasize the importance of cooperation between AI and HI in the creative process. Indeed, based on a stochastic statistical model, Sæbø and Brovold (2024) suggest that AI does not have the autonomous creative capabilities of humans. Koivisto and Grassini (2023) compared the creativity of humans to that of three chatbots using a common divergent thinking task: participants were asked to find unusual and creative uses for everyday objects. On average, the chatbots performed better than the human participants; however, the best human ideas matched or surpassed those of the chatbots. Thus, AI can be seen as a tool for enhancing creativity, while reinforcing the idea of the unique and complex nature of human creativity, which could be difficult to replicate or surpass with AI (Koivisto and Grassini, 2023). All this reinforces the idea of coopetition between AI and HI.
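
The statistical pattern reported by Koivisto and Grassini (2023) can be illustrated with a minimal sketch: a higher mean for chatbots can coexist with a higher maximum for humans. The scores below are invented for illustration; they are not the study's data.

```python
from statistics import mean

# Hypothetical creativity scores (0-10) on a divergent thinking task.
# Invented numbers, chosen only to reproduce the qualitative pattern:
# chatbots are consistently good, humans are more variable.
human_scores = [3.1, 4.0, 5.2, 6.8, 9.5, 2.7, 8.9]
chatbot_scores = [6.0, 6.4, 5.9, 6.2, 6.1, 6.3, 6.5]

# Chatbots can win on average while the best humans still come out on top.
print(f"mean  human={mean(human_scores):.2f}  chatbot={mean(chatbot_scores):.2f}")
print(f"best  human={max(human_scores):.2f}  chatbot={max(chatbot_scores):.2f}")
```

The contrast between the two summary statistics is what motivates the coopetition reading: the average case favors AI, the exceptional case favors HI.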

4.1.5.2 The (de)accountability of the decision

Humans exhibit irrational decision-making patterns in response to environmental triggers, such as the experience of economic loss or gain. These decision-making patterns can serve as behavioral signatures to distinguish humans from algorithms: the algorithm tends to become more risk-averse and more rational, while the human becomes more risk-seeking and irrational. Thus, the implementation of rationalizing tools can create the mirage of a world that would be completely rational and predictable, even though it is increasingly recognized that such predictability is impossible. This implies the need to manage both order and disorder.
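
As a toy illustration of such a behavioral signature (a hypothetical construction for this article, not a model from the literature), one could compare how often a decision-maker picks the risky option immediately after a loss versus after a gain; a risk-seeking shift after losses would be a characteristically human, irrational pattern.

```python
# Toy behavioral signature: rate of risky choices following a loss vs. a gain.
# Both sequences are hypothetical; outcome i precedes choice i + 1.
choices = ["safe", "risky", "safe", "risky", "risky", "safe", "safe", "risky"]
outcomes = ["loss", "gain", "loss", "gain", "loss", "gain", "loss", "gain"]

def risky_rate(after: str) -> float:
    # Collect the choice made right after each outcome of the given type.
    trials = [choices[i + 1] for i, o in enumerate(outcomes[:-1]) if o == after]
    return sum(c == "risky" for c in trials) / len(trials)

# A higher risky rate after losses suggests a human-like, loss-chasing signature.
print(f"risky after loss: {risky_rate('loss'):.2f}")
print(f"risky after gain: {risky_rate('gain'):.2f}")
```

In this invented sequence the risky rate is higher after losses than after gains, the kind of asymmetry a purely rational, risk-consistent algorithm would not exhibit.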

Indeed, "a human society cannot be totally subject to a programmed mechanical order" (Morin, 2001, p. 219, translation). Order feels much more secure than disorder. In our modern organizations, order is preferred, and even often reified: many organizations aspire to paperless processes and tidy offices (signs of modernity). An organization without disorder can give the appearance of an organization without problems, one that masters its field of activity. A manager who maintains order is given more consideration and appears more serious.

However, "disorder means not only aggression, delinquency, but also freedom, intuition and even creativity" (Morin, 2001, p. 221, translation), and "freedom can only be exercised in a situation involving both order and disorder" (Morin, 2001, p. 308, translation). Order is important, but we should not forget its counterpart. It is therefore essential not to put order on a pedestal, and to accept organizational disorder positively. This irrationality makes humans unpredictable. But this unpredictability is frightening: people, especially at work, would prefer to be surrounded by predictable beings, much easier to manage and less anxiety-inducing.

Obscuring the irrational aspect of the human in organizations often leads to this degeneration into rationalization. For example, in many organizations, decision-support tools become decision-making tools. These tools often rely on quantitative elements that can become a true religion. These quantitative elements, however relevant as a basis for reflection, are constructions. They are not neutral; they need to be contextualized to be interpreted. Yet, in a desire to eliminate uncertainty, people at work give these quantitative tools a power they do not have, but that humans would like them to have. Thus, all responsibility in an uncertain decision is eliminated: "it is not us, it is the tool, it is objective!" This rationalization is only the extreme form of an irrationality of rationality. People within organizations would gain from accepting irrationality rather than burying their heads in the sand, believing that hiding it will make it disappear.

4.1.5.3 Professional judgment

In this sense, the use of professional judgment may seem important to achieve a realistic position (Albert and Michaud, 2016). The excesses of rationality in organizations have long been noted, analyzed, and criticized (for example, Tsoukas and Knudsen, 2005). A mechanistic approach, through its determinism, emphasizes respect for pre-established criteria, as if it were possible to control one's environment. This way of seeing the world is very attractive, because it allows us to overcome a very frequent human fear: the uncertainty of paradoxical situations (Bauman, 1991). Professional judgment is more than an instrumental implementation of rationality. It requires sensitivity to transferable and intersubjective phenomena, as well as to unique people, contexts, and practices that always transcend rules and categories (Todres, 2008). Professional judgment must therefore contextualize generic knowledge. It is an act of decision, a disposition to decide based on a careful examination of theoretical knowledge and skills. Thus, it builds bridges between the universal terms of theory and the particularities of situated practice (Johnson and Reiman, 2007).

Problems in organizations are often complex and indeterminate, with no clear solution (Schön, 1992); as a result, there is no easy, mechanistic procedure that can substitute for the embodied decision of a person of flesh and blood. According to Facione et al. (1999), the exercise of professional judgment requires both the will and the ability to think critically. Thus, it seems unlikely that AI can develop professional judgment of its own, but it can provide tools to support professional judgment. Faulconbridge et al. (2023) have shown significant human abilities to synthesize information, cope with ambiguity, make creative and context-sensitive decisions, reassure others, and demonstrate empathy: in short, professional judgment. However, professionals can leverage the benefits of AI by letting it take over some elements of professional work. Their analysis reveals that intertwined AI-HI work allows automation to be integrated in a way that exploits its benefits while preserving context-appropriate, personalized, and creative decision-making (Faulconbridge et al., 2023), although AI can sometimes appear to erode professional judgment (Hoeyer and Wadmann, 2020). Thus, it is a real coopetition between AI and HI.

4.2 Contribution to research

Irrationality is therefore an important element of differentiation between HI and AI. It can be an element to consider in a coopetition between the two.

Humans can never be as rational as AI. Therefore, if humanity hides its irrational component, it could compete with AI on rational aspects, but in this game, the human is sure to lose. Human irrationality is usually studied in terms of its biases (Gulati et al., 2022), and in terms of how AI can counteract these biases (Kliegr et al., 2021). Yet human irrationality is not limited to cognitive biases, and the impossibility of modeling this irrationality is both what makes its study difficult and what makes it beautiful. It is this irrationality (in constant dialogue with rationality) that should be valued to allow human-machine coopetition.

4.3 Limitations and future research pathways

This conceptual paper has inherent limitations, as it is not supported by an empirical approach. It would be relevant in the future to explore this subject empirically, even if these elements of rationality and irrationality will be difficult to observe, given the often non-explicit nature of irrationality. Thus, it is not the results that should be analyzed, but rather the processes, whose deductive, inductive, or abductive nature could then be evaluated. Some HI processes are abductive and depend on irrational elements, and these processes could be examined. It would also be relevant to study the complementarity of the deductive processes of AI and the abductive processes of HI.

5 Conclusion

Artificial intelligence is taking an increasingly important place in human lives and shaking them up. Some view these machines as possessing the potential to transcend human abilities, while others imagine human defeat in a war with AI. The objective of this article was to understand the AI-HI relationship from a coopetitive (both cooperative and competitive) perspective, through the rationality-irrationality dialogic of the two forms of intelligence. Indeed, it has been shown that humans can never be as rational as AIs; thus, humans will always lose a competition with AI on rationality. However, by accepting elements of irrationality (in permanent dialogue with rationality), a human-machine coopetition can be realized. We were able to highlight this coopetition in three areas: creativity, decision-making accountability, and professional judgment. Humans and machines have many things to build together, or not!

Author contributions

M-NA: Writing – original draft, Writing – review & editing. SK: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Acknowledgments

We acknowledge Brian Hunt from Clayton State University, USA, for his proofreading.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abbas, M. H. I. (2020). A rational irrationality: reviewing the concept of rationality in conventional economics and Islamic economics. Al-Amwal 12, 77–85. doi: 10.24235/amwal.v1i1.6202

Acharjee, S., and Gogoi, U. (2024). The limit of human intelligence. Heliyon 10, 1–16. doi: 10.1016/j.heliyon.2024.e32465

Agrawal, A. J., Gans, A., and Goldfarb, A. (2019). Artificial intelligence: the ambiguous labor market impact of automating prediction. J. Econ. Perspect. 33, 31–50. doi: 10.1257/jep.33.2.31

Albert, M. N., and Michaud, N. (2016). From disillusion to the development of professional judgment: experience of the implementation process of a human complexity course. Sage Open 6:2158244016684372. doi: 10.1177/2158244016684372

Altanlar, A., Amini, S., Holmes, P., and Eshraghi, A. (2023). Opportunism, overconfidence and irrationality: a puzzling triad. Int. Rev. Fin.Anal. 88:102643. doi: 10.1016/j.irfa.2023.102643

Bauman, Z. (1991). The social manipulation of morality: moralizing actors, adiaphorizing action. Theory Cult. Soc. 8, 137–151. doi: 10.1177/026327691008001007

Baumard, P. (2019). Quand l'intelligence artificielle théorisera les organisations. Rev. Fran. Gest. 135–159. doi: 10.3166/rfg.2020.00409

Becker, G. S. (1962). Irrational behavior and economic theory. J. Polit. Econ. 70, 1–13. doi: 10.1086/258584

Bednarek, R., e Cunha, M. P., Schad, J., and Smith, W. K. (2021). “Implementing interdisciplinary paradox research,” in Research in the Sociology of Organizations (Leeds: Emerald Group Holdings Ltd.), 324.

Bellucci, F., and Pietarinen, A. V. (2020). Peirce on the justification of abduction. Stud. Hist. Philos. Sci. Part A 84, 12–19. doi: 10.1016/j.shpsa.2020.04.003

Besson, E. (2018). Service business markets: relationship development in the maritime industry. J. Bus. Bus. Market. 25, 273–297. doi: 10.1080/1051712X.2018.1519968

Brunsson, N. (1982). The irrationality of action and action rationality: decisions, ideologies and organizational actions. J. Manage. Stud. 19, 29–44. doi: 10.1111/j.1467-6486.1982.tb00058.x

Brynjolfsson, E., and McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY; London: WW Norton and Company.

Brynjolfsson, E., Mitchell, T., and Daniel, R. (2018). “What can machines learn, and what does it mean for occupations and the economy?” in AEA Papers and Proceedings 108 (American Economic Association), 43–47. Available online at: https://dspace.mit.edu/handle/1721.1/120302

Budhwar, P., Chowdhury, S., Wood, G., Aguinis, H., Bamber, G. J., Beltran, J. R., et al. (2023). Human resource management in the age of generative artificial intelligence: perspectives and research directions on ChatGPT. Hum. Resour. Manag. J. 33, 606–659. doi: 10.1111/1748-8583.12524

Casilli, A. A. (2021). Waiting for robots: the ever-elusive myth of automation and the global exploitation of digital labor. Sociologias 23, 112–133. doi: 10.1590/15174522-114092

Chew, A. W. (2020). Disrupting the representational limit of abductively-driven research: a problematization of the link between abduction and representational thought. Res. Educ. 109, 90–108. doi: 10.1177/0034523720920670

Chowdhury, S., Dey, P., Joel-Edgar, S., Bhattacharya, S., Rodriguez-Espindola, O., Abadie, A., et al. (2023). Unlocking the value of artificial intelligence in human resource management through AI capability framework. Hum. Resour. Manag. Rev. 33:100899. doi: 10.1016/j.hrmr.2022.100899

Colbrook, M. J., Antun, V., and Hansen, A. C. (2022). The difficulty of computing stable and accurate neural networks: on the barriers of deep learning and Smale's 18th problem. Proc. Nat. Acad. Sci. U. S. A. 119:e2107151119. doi: 10.1073/pnas.2107151119

Davenport, T. H., and Kirby, J. (2016). Au-delà de l'automatisation. Harvard Business Review, 45–53.

De Swarte, T. (2013). Quand l'intelligence rationnelle abîme les compétences des salariés: une organisation high tech et la question de l'imaginaire (1980-2010). Revue Psychanal. Manag. 1, 221–236.

Descartes, R. (1724/1974). Meditations Metaphysiques, 7th Edn. Paris: Presses universitaires de France.

El-Gharbi, H., and Khefacha, I. (2009). Opportunisme: comportement rationnel ou irrationnel. J. Soc. Manag. 7, 73–94.

Ellis, A. (1975). “The biological basis of human irrationality,” in Annual Meeting of the American Psychological Association, 83rd (Chicago, IL: American Psychological Association).

Facione, P. A., Facione, N. C., Giancarlo, C. A. F., and Ferguson, N. (1999). “Le Jugement professionnel et la disposition à la pensée critique [Professional judgment and the propensity of critical thinking],” in Enseigner et Comprendre: Le Développement d'une Pensée Critique [Teaching and Understanding: The Development of Critical Thinking], eds. L Guilbert, J. Boisvert, and N. Ferguson (Quebec City, QC, Canada: Les Presses de l'Université Laval), 307–326.

Farina, M., Lavazza, A., Sartori, G., and Pedrycz, W. (2024). Machine learning in human creativity: status and perspectives. AI Soc. 39, 3017–3029. doi: 10.1007/s00146-023-01836-5

Faulconbridge, J., Sarwar, A., and Spring, M. (2023). How professionals adapt to artificial intelligence: the role of intertwined boundary work. J. Manag. Stud. 62, 1991–2024. doi: 10.1111/joms.12936

Floridi, L. (2023). AI as agency without intelligence: on ChatGPT, large language models, and other generative models. Philos. Technol. 36:15. doi: 10.1007/s13347-023-00621-y

Fortes, G. (2023). “Abduction,” in The Palgrave Encyclopedia of the Possible (Cham: Springer International Publishing), 1–9. doi: 10.1007/978-3-319-98390-5_44-1

Geoffroy, F. (2012). Quand l'hypocrisie managériale protège l'organisation: les apports de Nils Brunsson 1. Revue Int. Psychosociol. 18, 301–315. doi: 10.3917/rips1.046.0301

Gilboa, I., Postlewaite, A., and Schmeidler, D. (2012). Rationality of belief or: why savage's axioms are neither necessary nor sufficient for rationality. Synthese 187, 11–31. doi: 10.1007/s11229-011-0034-2

Gordon, J.-S., and Gunkel, D. J. (2025). Artificial intelligence and the future of work. AI Soc. 40, 1897–1903.

Grace, K., Salvatier, J., Dafoe, A., Zhang, B., and Evans, O. (2017). When will AI exceed human performance? Evidence from AI experts. J. Artif. Intell. Res. 62, 1–48. doi: 10.1613/jair.1.11222

Graf, L., König, A., Enders, A., and Hungenberg, H. (2012). Debiasing competitive irrationality: how managers can be prevented from trading off absolute for relative profit. Eur. Manag. J. 30, 386–403. doi: 10.1016/j.emj.2011.12.001

Griffiths, T. L. (2020). Understanding human intelligence through human limitations. Trends Cogn. Sci. 24, 873–883. doi: 10.1016/j.tics.2020.09.001

Gulati, A., Lozano, M. A., Lepri, B., and Oliver, N. (2022). BIASeD: Bringing Irrationality into Automated System Design [arXiv preprint arXiv:2210.01122].

Hannon, O., Ciriello, R., and Gal, U. (2024). “Just because we can, doesn't mean we should: algorithm aversion as a principled resistance,” in Hawaii International Conference on System Sciences 2024 (HICSS-57), 5. Available online at: https://aisel.aisnet.org/hicss-57/os/dark_side/5

Heiland, H. (2023). The social construction of algorithms: a reassessment of algorithmic management in food delivery Gig work. N. Technol. Work Employ. 40, 1–19. doi: 10.1111/ntwe.12282

Heitz, J. M. (2014). L'action intentionnelle: entre rationalité et irrationalité: une approche davidsonienne appliquée aux organisations. Hum. Entr. 2, 69–81. doi: 10.3917/hume.317.0069

Hitsuwari, J., Ueda, Y., Yun, W., and Nomura, M. (2023). Does human–AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Comput. Hum. Behav. 139:107502. doi: 10.1016/j.chb.2022.107502

Hoeyer, K., and Wadmann, S. (2020). Meaningless work: how the datafication of health reconfigures knowledge about work and erodes professional judgement. Econ. Soc. 49, 433–454. doi: 10.1080/03085147.2020.1733842

Holley, R. P. (2018). Education and training for library management. J. Libr. Adm. 58, 293–301. doi: 10.1080/01930826.2018.1436795

Jarzabkowski, P., and Kaplan, S. (2015). Strategy tools-in-use: a framework for understanding “technologies of rationality” in practice. Strat. Manag. J. 36, 537–558. doi: 10.1002/smj.2270

Jia, N., Luo, X., Fang, Z., and Liao, C. (2024). When and how artificial intelligence augments employee creativity. Acad. Manag. J. 67, 5–32. doi: 10.5465/amj.2022.0426

Johnson, L. E., and Reiman, A. J. (2007). Beginning teacher disposition: examining the moral/ethical domain. Teach. Teach. Educ. 23, 676–687. doi: 10.1016/j.tate.2006.12.006

Julia, L. (2019). L'intelligence artificielle n'existe pas. Paris: Editions F1RST.

Kets de Vries, M. F. (1994). The leadership mystique. Acad. Manag. Perspect. 8, 73–89. doi: 10.5465/ame.1994.9503101181

King, M. R. (2023). The future of AI in medicine: a perspective from a Chatbot. Ann. Biomed. Eng. 51, 291–295. doi: 10.1007/s10439-022-03121-w

Kliegr, T., Bahník, Š., and Fürnkranz, J. (2021). A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artif. Intell. 295:103458. doi: 10.1016/j.artint.2021.103458

Koivisto, M., and Grassini, S. (2023). Best humans still outperform artificial intelligence in a creative divergent thinking task. Sci. Rep. 13:13601. doi: 10.1038/s41598-023-40858-3

Konaka, Y., and Naoki, H. (2023). Decoding reward-curiosity conflict in decision-making from irrational behaviors. Nat. Comput. Sci. 3, 418–432. doi: 10.1038/s43588-023-00439-w

Larsson, S., and Viktorelius, M. (2024). Reducing the contingency of the world: magic, oracles, and machine-learning technology. AI Soc. 39, 183–193. doi: 10.1007/s00146-022-01394-2

Lee, H. K. (2022). Rethinking creativity: creative industries, AI and everyday creativity. Media Cult. Soc. 44, 601–612. doi: 10.1177/01634437221077009

Martignon, L., Erickson, T., and Viale, R. (2022). Transparent, simple and robust fast-and-frugal trees and their construction. Front. Hum. Dyn. 4:790033. doi: 10.3389/fhumd.2022.790033

Melé, D., and Cantón, C. G. (2014). “Views of the human being in religions and philosophies,” in Human Foundations of Management: Understanding the Homo Humanus (Palgrave Macmillan: IESE Business Collection), 68–87. doi: 10.1057/9781137462619_5

Morin, E. (1977). La méthode-La nature de la nature (Tome 1). Le Seuil. 22, 131.

Morin, E. (1982). Science avec conscience. Paris : Fayard.

Morin, E. (1986). La méthode. 3, La Connaissance de la Connaissance: 1. Anthropologie de la connaissance. Ed. du Seuil.

Morin, E. (1996). Rationalité et rationalisation. Cahiers de l'OCHA 5 (Pensée magique et alimentation aujourd'hui, Claude Fischler, dir.). 5:2.

Morin, E. (2001). L'humanité de l'humanité: L'identité humaine. Paris: Editions Seuil.

Müller, V. C. (2024). “Philosophy of AI: a structured overview,” in Cambridge Handbook on the Law, Ethics and Policy of Artificial Intelligence, ed. N. Smüha (Cambridge: Cambridge University Press). doi: 10.1017/9781009367783.004

Nah, F. F. H., Zheng, R., Cai, J., Siau, K., and Chen, L. (2023). Generative AI and ChatGPT: applications, challenges, and AI-human collaboration. J. Inform. Technol. Case Appl. Res. 25, 277–304. doi: 10.1080/15228053.2023.2233814

Nordhaus, W. (2015). “Are we approaching an economic singularity? Information technology and the future of economic growth,” in Cowles Foundation Discussion Papers, No. 2021 (Cambridge, MA: National Bureau of Economic Research). doi: 10.3386/w21547

Nyholm, S. (2024). Artificial intelligence and human enhancement: can AI technologies make us more (artificially) intelligent?. Cambridge Quart. Healthc. Ethics 33, 76–88. doi: 10.1017/S0963180123000464

O'Doherty, D. (2020). The Leviathan of rationality: using film to develop creativity and imagination in management learning and education. Acad. Manag. Learn. Educ. 19, 366–384. doi: 10.5465/amle.2019.0197

OECD (2019). Artificial Intelligence in Society. Paris: OECD Publishing. doi: 10.1787/eedfee77-en

Peirce, C. S. (1893-1913/1998). The Essential Peirce, Volume 2: Philosophical Writings. Indiana University Press.

Sæbø, S., and Brovold, H. (2024). On the Stochastics of Human and Artificial Creativity [arXiv preprint arXiv:2403.06996].

Schmitt, C. (2021). Si Edgar Morin m'était conté : désordre, dialogique et complexité. Projectics 30, 71–85. doi: 10.3917/proj.030.0071

Schön, D. A. (1992). The Reflective Practitioner - How Professionals Think in Action. London : Routledge.

Shirley, D. A., and Langan-Fox, J. (1996). Intuition: a review of the literature. Psychol. Rep. 79, 563–584. doi: 10.2466/pr0.1996.79.2.563

Simon, H. A. (1979). Rational decision making in business organizations. Am. Econ. Rev. 69, 493–513.

Simon, H. A. (1986). Rationality in psychology and economics. J. Business 59, S209–S224. doi: 10.1086/296363

Stanovich, K. (2011). Rationality and the Reflective Mind. New York, NY: Oxford University Press Inc., doi: 10.1093/acprof:oso/9780195341140.001.0001

Starbuck, W. H. (2004). Vita contemplativa: why I stopped trying to understand the real world. Organiz. Stud. 25, 1233–1254. doi: 10.1177/0170840604046361

Stetzka, R. M., and Winter, S. (2023). How rational is gambling? J. Econ. Surv. 37, 1432–1488. doi: 10.1111/joes.12473

Thanos, I. C. (2023). The complementary effects of rationality and intuition on strategic decision quality. Eur. Manag. J. 41, 366–374. doi: 10.1016/j.emj.2022.03.003

Todres, L. (2008). Being with that: the relevance of embodied understanding for practice. Qual. Health Res. 18, 1566–1573. doi: 10.1177/1049732308324249

Tsoukas, H., and Knudsen, C. (2005). The Oxford Handbook of Organization Theory. New York, NY: Oxford University Press Inc., doi: 10.1093/oxfordhb/9780199275250.001.0001

Vinchon, F., Lubart, T., Bartolotta, S., Gironnay, V., Botella, M., Bourgeois-Bougrine, S., et al. (2023). Artificial intelligence and creativity: a manifesto for collaboration. J. Creat. Behav. 57, 472–484. doi: 10.1002/jocb.597

Walco, D. K., and Risen, J. L. (2017). The empirical case for acquiescing to intuition. Psychol. Sci. 28, 1807–1820. doi: 10.1177/0956797617723377

Webb, M. W. (2020). Essays in the economics of artificial intelligence (Doctoral dissertation; Master's thesis). Stanford University; Stanford Digital Repository. Available online at: https://purl.stanford.edu/hy957wm6685

Wu, Z., Ji, D., Yu, K., Zeng, X., Wu, D., and Shidujaman, M. (2021). “AI creativity and the human-AI co-creation model,” in Human-Computer Interaction. Theory, Methods and Tools: Thematic Area, HCI 2021, Held as Part of the 23rd HCI International Conference, HCII 2021, Virtual Event, July 24–29, 2021, Part I 23 (Springer International Publishing), 171–190. doi: 10.1007/978-3-030-78462-1_13

Keywords: human intelligence, artificial intelligence, rationality, irrationality, coopetition

Citation: Albert M-N and Koubaa S (2025) The coopetition of human intelligence and artificial intelligence through the prism of irrationality. Front. Hum. Dyn. 7:1554731. doi: 10.3389/fhumd.2025.1554731

Received: 08 January 2025; Accepted: 20 June 2025;
Published: 09 July 2025.

Edited by:

Paolo Bottoni, Sapienza University of Rome, Italy

Reviewed by:

Vesa K. Salminen, HAMK Häme University of Applied Sciences, Finland
Santanu Acharjee, Gauhati University, India
Stefan Trzcieliński, Poznań University of Technology, Poland

Copyright © 2025 Albert and Koubaa. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Marie-Noelle Albert, marie-noelle_albert@uqar.ca

ORCID: Salah Koubaa orcid.org/0000-0002-0157-0473