REVIEW article

Front. Psychol., 04 January 2022
Sec. Personality and Social Psychology

The Impact of Cognitive Biases on Professionals’ Decision-Making: A Review of Four Occupational Areas

  • 1Université de Lorraine, 2LPN, Nancy, France
  • 2Centre d’Économie de la Sorbonne, CNRS UMR 8174, Paris, France

The author reviewed the research on the impact of cognitive biases on professionals’ decision-making in four occupational areas (management, finance, medicine, and law). Two main findings emerged. First, the literature reviewed shows that a dozen cognitive biases have an impact on professionals’ decisions in these four areas, overconfidence being the most recurrent bias. Second, the level of evidence supporting the claim that cognitive biases impact professional decision-making differs across the areas covered. Research in finance relied primarily upon secondary data, while research in medicine and law relied mainly upon primary data from vignette studies (both levels of evidence are found in management). Two research gaps are highlighted. The first is a potential lack of ecological validity of the findings from vignette studies, which are numerous. The second is the neglect of individual differences in cognitive biases, which might lead to the false idea that all professionals are equally susceptible to biases. To address that issue, we suggest that reliable, specific measures of cognitive biases need to be improved or developed.

Introduction

When making judgments or decisions, people often rely on simplified information processing strategies called heuristics, which may result in systematic, predictable errors called cognitive biases (hereafter CB). For instance, people tend to overestimate the accuracy of their judgments (overconfidence bias), to perceive events as being more predictable once they have occurred (hindsight bias), or to seek and interpret evidence in ways that are partial to existing beliefs and expectations (confirmation bias). In fact, the seminal work of Kahneman and Tversky on judgment and decision-making in the 1970s opened up a vast research program on how decision-making deviates from normative standards (e.g., Tversky and Kahneman, 1974; Kahneman et al., 1982; Gilovich et al., 2002).

The “heuristics and biases” program has been remarkably fruitful, unveiling dozens of CB and heuristics in decision-making (e.g., Baron, 2008, listed 53 such biases). While this research turned out to have a large impact in the academic field and beyond (Kahneman, 2011), it is worth noting that it led to some debate (Vranas, 2000; Pohl, 2017). In particular, Gigerenzer (1991, 1996; Gigerenzer et al., 2008) argued that Kahneman and Tversky relied upon a narrow view of normative rules (probability theory), leading them to ask participants to make artificial judgments (e.g., estimating the probability of single events) likely to result in so-called “errors.” Gigerenzer also pointed out the overemphasis on decision errors and the lack of theory behind the heuristics-and-biases approach, which eventually resulted in a list of cognitive errors with no unifying theoretical framework. However, there have been several attempts to overcome this shortcoming, such as the reframing of the heuristics-and-biases literature in terms of attribute substitution (Kahneman and Frederick, 2002) and the various taxonomies of CB advanced on the basis of dual-process models (e.g., Stanovich et al., 2008).

While early research on CB was conducted on lay participants to investigate decision-making in general, there has been a large interest in how such biases may impede professional decision-making in areas such as management (e.g., Maule and Hodgkinson, 2002), finance (e.g., Baker and Nofsinger, 2002), medicine (e.g., Blumenthal-Barby and Krieger, 2015), and law (e.g., Rachlinski, 2018). Consider, for example, the framing effect: when making risky decisions, people prefer sure gains over risky ones, whereas they prefer risky losses over sure ones (Kahneman and Tversky, 1979). Therefore, framing a problem in terms of gains versus losses can significantly impact decision-making. In most lawsuits, for instance, plaintiffs choose between a sure gain (the settlement payment) and a potential larger gain (in the case of further litigation), while defendants choose between a sure loss (the settlement payment) and a potential larger loss (in the case of further litigation). In fact, when considering whether the parties should settle a case, judges evaluating it from the plaintiff’s perspective are more likely to recommend settlement than those evaluating it from the defendant’s perspective (Guthrie et al., 2001). Likewise, when asked to rate the effectiveness of a drug, doctors’ ratings are influenced by whether the results of a hypothetical clinical trial are presented in terms of absolute survival (gain), absolute mortality (loss), or relative mortality reduction (gain) (Perneger and Agoritsas, 2011).

For the sake of convenience, we list below the common definition of the main CB considered in this review.

Anchoring bias is the tendency for judgments (especially numerical judgments) to be biased toward the first piece of information received, the anchor (Tversky and Kahneman, 1974).

Availability bias is the tendency by which a person evaluates the probability of events by the ease with which relevant instances come to mind (Tversky and Kahneman, 1973).

Confirmation bias is the tendency to search for, to interpret, to favor, and to recall information that confirms or supports one’s prior personal beliefs (Nickerson, 1998).

Disposition effect is the tendency among investors to sell stock market winners too soon and hold on to losers too long (Shefrin and Statman, 1985). This tendency is typically related to loss aversion (Kahneman and Tversky, 1979).

Hindsight bias is a propensity to perceive events as being more predictable, once they have occurred (Fischhoff, 1975).

Omission bias is the preference for harm caused by omissions over equal or lesser harm caused by acts (Baron and Ritov, 2004).

Outcome bias is the tendency to judge the quality of a decision based on information about its outcome. Such judgments are erroneous with respect to the normative assumption that “information that is available only after a decision is made is irrelevant to the quality of the decision” (Baron and Hershey, 1988, p. 569).

Overconfidence bias is a common inclination of people to overestimate their own abilities to successfully perform a particular task (Brenner et al., 1996).

Relative risk bias is a stronger inclination to choose a particular treatment when presented with the relative risk than when presented with the same information described in terms of the absolute risk (Forrow et al., 1992).

Susceptibility to framing is the tendency for people to react differently to a single choice depending on whether it is presented as a loss or a gain (Tversky and Kahneman, 1981).

In the present paper, we review the research on the impact of CB on professional decision-making in four areas: management, finance, medicine, and law. Those applied areas were selected as they have led to the highest number of publications on this topic so far (see “Materials and Methods”). This study aims to address the following research questions:

1. Assess the claim that CB impact professionals’ decision-making

2. Assess the level of evidence reported in the empirical studies

3. Identify the research gaps

We take a narrative approach to synthesizing the key publications and representative empirical studies to answer these research questions. To the best of our knowledge, this study is the first literature review on this research topic covering multiple areas together. This review is narrative, as opposed to a systematic review, which is one of its limitations. However, it aims to be useful both to researchers and professionals working in the areas covered.

The present paper is structured as follows. The Methods section provides details about the methodology used to conduct the literature review. In the following sections, we review the key findings in each of the four occupational areas covered. Finally, in the Discussion section, we answer the three research questions addressed in light of the findings reviewed.

Materials and Methods

We conducted a systematic literature search using the Web of Science (WoS) database with the search terms “cognitive biases AND decision making.” The search criteria included research articles, review articles, or book chapters with no restriction regarding the time period. We focused on the WoS database as the “Web of Science Categories” filter offered a practical means of selecting the applied areas covered. Admittedly, the results of our review might have been different had we covered more databases; however, as our strategy was to review the key publications and representative empirical studies in each of the areas selected, we reasoned that virtually any database would have led to these records.

The PRISMA flowchart in Figure 1 illustrates the process of article search and selection in this study. The WoS search led to a total of 3,169 records. Before screening, we used the “Web of Science Categories” filter to identify and select the four applied areas with the highest number of publications. Those areas were management (n = 436), which merged the categories “Management” (n = 260) and “Business” (n = 176); medicine (n = 517), which merged the categories “Psychiatry” (n = 261), “Health Care Sciences Services” (n = 112), “Medicine General Internal” (n = 94), “Radiology Nuclear Medicine Medical Imaging” (n = 22), “Critical Care Medicine” (n = 14), and “Emergency Medicine” (n = 14); law (n = 110); and finance (n = 70). Notably, while the category “Psychology Applied” was associated with a significant number of publications (n = 146), a closer examination revealed that the majority of them were related to other applied areas (e.g., management, medicine, law, and ergonomics). Accordingly, this category was not included in the review. The abstracts selected were reviewed according to two inclusion criteria: (1) the article had a clear focus on cognitive biases and decision-making (e.g., not on implicit biases); and (2) the article reported a review (narrative or systematic) on the topic or a representative empirical study. This screening led to a selection of 79 eligible articles, all of which were included in the review.

Figure 1. PRISMA flowchart of article search and collection.

Management

The life of any organization is made of crucial decisions. According to Eisenhardt and Zbaracki (1992, p. 17), strategic decisions are “those infrequent decisions made by the top leaders of an organization that critically affect organizational health and survival.” For instance, when Disney decided to locate Euro Disney in Paris or when Quaker decided to acquire Snapple, these companies took strategic decisions.

A defining feature of strategic decisions is their lack of structure. While other areas of management deal with recurring, routinized, and operationally specific decisions, strategic issues and problems tend to be relatively ambiguous, complex, and surrounded by risk and uncertainty (Hodgkinson, 2001). How do managers actually deal with such decisions? Much of early research on strategic decision-making was based on a neoclassical framework with the idea that strategists in organizations are rational actors. However, the seminal work of Kahneman and Tversky in the 1970s questioned this assumption (Hardman and Harries, 2002). In fact, the very notion of “bounded rationality” emerged in the study of organizations (March and Simon, 1958). One might argue that the issue of individual biases in strategic decision-making is of limited relevance as strategic decisions are the product of organizations rather than individuals within the context of a wider sociopolitical arena (Mintzberg, 1983; Johnson, 1987). However, individual (micro) factors might help explain organizational (macro) phenomena, an idea promoted by behavioral strategy (Powell et al., 2011).

The “heuristics and biases” program revived interest in bounded rationality in management, with the idea that decision-makers may use heuristics to cope with complex and uncertain environments, which in turn may result in inappropriate or suboptimal decisions (e.g., Barnes, 1984; Bazerman, 1998). Indeed, it is relatively easy to see how biases such as availability, hindsight, or overconfidence might play out in the strategic decision-making process. For instance, it may seem difficult in hindsight to understand why IBM and Kodak failed to see the potential that Haloid saw (which led to the Xerox company). The hindsight bias can actually lead managers to distort their evaluations of initial decisions and their predictions (Bukszar and Connolly, 1988). Likewise, practicing auditors of major accounting firms are sensitive to anchoring effects (Joyce and Biddle, 1981), and prospective entrepreneurs tend to neglect base rates for business failures (Moore et al., 2007).

To our knowledge, no systematic review of empirical research on the impact of heuristics and CB on strategic decision-making has been published to date. Whereas the idea that CB could affect strategic decisions is widely recognized, the corresponding empirical evidence is quite weak. Most research on this topic consists of narrative papers relying upon documentary sources and anecdotal evidence (e.g., Duhaime and Schwenk, 1985; Lyles and Thomas, 1988; Huff and Schwenk, 1990; Zajac and Bazerman, 1991; Bazerman and Moore, 2008). In fact, the typical paper describes a few CB and provides, for each one, examples of how a particular bias can lead to poor strategic decisions (see Barnes, 1984, for a representative example). While the examples provided are often compelling, such research faces severe methodological limitations.

The work of Schwenk (1984, 1985) is representative of this type of research. Schwenk identified three stages of the strategic decision process (goal formulation and problem identification, generation of strategic alternatives, and evaluation and selection of the best alternative) and a set of heuristics and biases that might affect decisions at each stage. He also provided for each bias an illustrative example of how it may impede the overall quality of strategic decisions. For example, the representativeness heuristic may affect the evaluation and selection stage. To illustrate this, Schwenk mentioned the head of an American retail organization (Montgomery Ward) who held a strong belief that there would be a depression at the end of the Second World War, as was the case after World War I. Based on this belief, this executive decided not to allow his company to expand to meet competition from his rival (Sears), which led to a permanent loss of market share to Sears. Schwenk (1988) listed ten heuristics and biases of potential key significance in the context of strategic decision-making (availability, selective perception, illusory correlation, conservatism, law of small numbers, regression bias, wishful thinking, illusion of control, logical reconstruction, and hindsight bias).

In a similar vein, Das and Teng (1999) proposed a framework to explore the presence of four basic types of CB (prior hypotheses and focusing on limited targets, exposure to limited alternatives, insensitivity to outcome probabilities, and illusion of manageability) under five different modes of decision-making (rational, avoidance, logical incrementalist, political, and garbage can). They proposed that not all basic types of biases are robust across all kinds of decision processes; rather, their selective presence is contingent upon the specific processes that decision makers engage in. For instance, the garbage can mode (Cohen et al., 1972) depicts decision-making processes as organized anarchies, in which a decision is largely dependent on chance and timing. In this kind of process, decision makers do not know their objectives ex ante, but merely look around for decisions to make. Das and Teng (1999) hypothesized that managers under the garbage can mode will be exposed to limited alternatives and insensitive to outcome probabilities. On the contrary, managers under the rational mode would be exposed to prior hypotheses and illusion of manageability. This framework, however, is not supported by rigorous empirical evidence.

It is not difficult to list examples of poor strategic decisions that can be readily interpreted – in hindsight – as the result of heuristics and biases. However, the claim that CB influence strategic decisions needs to be tested more directly through laboratory and experimental studies (Maule and Hodgkinson, 2002). Such research is scarce, probably because of concerns about its ecological validity, an issue of primary importance in the field of management research (Schwenk, 1982). Still, two CB in particular have been studied quantitatively: the framing effect and CEO overconfidence.

Hodgkinson et al. (1999) used an experimental setting to investigate the effect of framing on strategic decisions. Following the “Asian Disease” problem (Tversky and Kahneman, 1981), they presented subjects (undergraduate management students) with a 500-word case vignette giving a brief history of a company that manufactured and distributed fast paint-drying systems. A positive and a negative frame were used, and participants were asked to adopt the role of a board member facing a major strategic decision and to indicate which of two alternative options they would choose. The positive frame emphasized gains from a reference point of no profit, whereas the negative frame highlighted losses from a reference point where the target profit is achieved (£3 million). In addition, participants were either asked to choose between the presented options directly or to represent the ways in which they thought about the problem in the form of a causal map prior to making their choice. It turned out that when participants made their decisions directly, a massive framing effect was found (45.5% of participants chose the risk-averse option in the positive frame versus 9% in the negative frame). However, no framing effect was observed when participants were asked to draw a causal map before making their choice (36.4% of the participants opted for the risk-averse option in both versions). Interestingly, Hodgkinson et al. reported the same findings with experienced participants (senior managers in a banking organization).
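
A minimal sketch of how such a framing effect can be tested statistically is shown below. The cell counts are hypothetical, chosen only to match the reported choice proportions (45.5% vs. 9%); the study's actual sample sizes are not reproduced here.

```python
# Hypothetical counts reproducing the reported proportions (10/22 = 45.5%,
# 2/22 = 9.1%). Fisher's exact test compares risk-averse choices across frames.
from scipy.stats import fisher_exact

table = [
    [10, 12],  # positive frame: [risk-averse, risk-seeking]
    [2, 20],   # negative frame: [risk-averse, risk-seeking]
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```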

Another CB that has led to a large amount of empirical research in strategic management is CEO overconfidence. Overconfidence has various aspects: overprecision, overestimation, and overplacement (Moore and Schatz, 2017). Regarding overprecision, Ben-David et al. (2013) investigated the accuracy of stock market predictions made by senior finance executives (the majority of them CFOs). The data were collected in 40 quarterly surveys conducted between June 2001 and March 2011. Ben-David et al. asked participants to predict one- and 10-year market-wide stock returns and to provide an 80% confidence interval for their predictions (“Over the next year, I expect the annual S&P 500 return will be: There is a 1-in-10 chance the actual return will be less than ___%; I expect the return to be: ___%; There is a 1-in-10 chance the actual return will be greater than ___%.”). It turned out that the CFOs were severely miscalibrated: the realized one-year S&P 500 returns fell within their 80% confidence intervals only 36.3% of the time. Even during the least volatile quarters in the sample, only 59% of realized returns fell within the 80% confidence intervals provided. Comparing the size of the CFOs’ confidence intervals to the distribution of historical one-year returns revealed that their intervals were far too narrow: CFOs provided confidence intervals with an average width of 14.5%, whereas the difference between the 10th and 90th return percentiles of the realized distribution of one-year S&P 500 returns is 42.2% (only 3.4% of CFOs provided confidence intervals wider than that).
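
The calibration check behind these figures can be illustrated with a short simulation: for each forecast, test whether the realized return fell inside the stated 80% interval. This is a sketch with synthetic data, not the Ben-David et al. (2013) survey data; the 14.5% average interval width is the only number taken from the study.

```python
# Synthetic illustration of confidence-interval calibration: narrow intervals
# around noisy point forecasts capture the realized return far less often
# than the nominal 80%.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
realized = rng.normal(0.06, 0.17, n)   # synthetic realized one-year returns
midpoint = rng.normal(0.06, 0.05, n)   # synthetic point forecasts
half_width = 0.145 / 2                 # average stated interval width of 14.5%
lower, upper = midpoint - half_width, midpoint + half_width

hit_rate = np.mean((realized >= lower) & (realized <= upper))
print(f"Hit rate: {hit_rate:.1%} (well below the nominal 80% for these inputs)")
```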

Managers also overestimate their abilities, particularly with regard to the illusion of control. In their review on risk perception among managers, March and Shapira (1987) reported that most managers (1) consider that they take risks wisely and that they are less risk-averse than their colleagues, (2) perceive risk as largely controllable, and (3) attribute this controllability to skills and information.

Finally, executives also appear to be overconfident with regard to overplacement. Malmendier and Tate (2005) assessed CEO overconfidence through revealed preferences, examining how CEOs exercised their stock options. A CEO who persistently exercises options later than a rational benchmark suggests reveals a belief in his or her ability to keep the company’s stock price rising and a willingness to profit from expected price increases by holding the options. Using panel data on the personal portfolio and corporate investment decisions of Forbes 500 CEOs, Malmendier and Tate reported that most CEOs excessively hold company stock options, thereby failing to reduce their personal exposure to company-specific risk. CEO overconfidence is also believed to be involved in merger decisions. As overconfident CEOs overestimate their ability to generate returns, they are expected to overpay for target companies and undertake value-destroying mergers. Using two measures of CEO overconfidence (CEOs’ personal over-investment in their company and their press portrayal), Malmendier and Tate (2008) provided support for that hypothesis: the odds of making an acquisition are 65% higher if the CEO is classified as overconfident.

Finance

The case of CB in finance is special. In the 1980s, CB were invoked to account for observations on financial markets that disagreed with the predictions of standard finance. This paradigm relies upon expected utility theory, assuming that investors make rational decisions under uncertainty (i.e., maximizing utility). Standard finance produced core theoretical concepts, such as arbitrage, portfolio theory, capital asset pricing theory, and the efficient market hypothesis, all assuming rational investors. From the 1970s onward, some observations on financial markets relative to trading behavior, volatility, market returns, and portfolio selection turned out to be inconsistent with this framework (“anomalies”). Psychological biases (micro level) were invoked as theoretical explanations of these market anomalies (macro level), launching the field of behavioral finance (Shiller, 2003). In particular, behavioral finance capitalized on prospect theory (Kahneman and Tversky, 1979), a more realistic view of decision-making under uncertainty than expected utility theory. A prime example is how (myopic) loss aversion – a key concept of prospect theory – can account for the equity premium puzzle (i.e., the excessively high difference between equity returns and the return of Treasury bills; Benartzi and Thaler, 1995).

Here, we focus on investment decision-making in individual investors (Shefrin, 2000; Baker and Nofsinger, 2010; Baker and Ricciardi, 2014) and how CB may impede such decisions (see Baker and Nofsinger, 2002, and Kumar and Goyal, 2015, for reviews).1 In fact, financial economists have distinguished between two types of investors in the market: arbitrageurs and noise traders. While the former are assumed to be fully rational, noise traders are investors prone to CB (De Long et al., 1990), which results in under-diversified portfolios. Various CB have been invoked to account for poor individual investment decisions resulting in suboptimal portfolio management. For example, investors tend to favor stocks that performed well during the past 3–5 years (“winners”) over stocks that performed poorly (“losers”), neglecting that, because of regression to the mean, the losers will tend to outperform the winners over the following years (actually by 30%; De Bondt and Thaler, 1985). Investors may exhibit a home bias (an instance of familiarity bias), a tendency to invest the majority of their portfolio in domestic equities rather than diversifying into foreign equities (Coval and Moskowitz, 1999). Investors may also fall prey to herding, a tendency to blindly follow what other investors do (Grinblatt et al., 1995).

Two CB have been particularly studied in investment decision-making: overconfidence and disposition effect (see the systematic review of Kumar and Goyal, 2015). On the one hand, investors are usually overconfident with regard to the precision of their forecasts. When asked to predict the future return or price of a stock, investors report confidence intervals that are too narrow compared to the actual variability of prices (e.g., De Bondt, 1998). Investors also overestimate their ability to beat the market. Baker and Nofsinger (2002) reported a finding of a Gallup survey in 2001 revealing that on average, investors estimated that the stock market return during the next 12 months would be 10.3% while estimating that their portfolio return would be 11.7%. Barber and Odean (2001) reported evidence that overconfidence in investors is related to gender. Based on a sample of 35,000 individual accounts over a six-year period, their findings showed that males exhibit more overconfidence regarding their investing abilities and also trade more often than females. Overconfidence in investors makes them more prone to take high risks (Chuang and Lee, 2006) and trade too much (Odean, 1999; Statman et al., 2006; Glaser and Weber, 2007), which results in poor financial performance (consequent transaction costs and losses). For instance, trading turnover and portfolio returns are negatively correlated: of 66,465 households with accounts at a large discount broker during 1991–1996, households that trade most had an annual return of 11.4% while the average annual return was 16.4% (Barber and Odean, 2000).

On the other hand, the disposition effect is the tendency of investors to sell winning stocks too early while holding on to losing positions for too long (Shefrin and Statman, 1985). Based on trading records for 10,000 accounts at a large discount brokerage house, Odean (1998) reported that, on average, winning investments are 50% more likely to be sold than losing investments (similar results were obtained in other countries, such as France; Boolell-Gunesh et al., 2009). The disposition effect originates in the loss aversion described by prospect theory (Kahneman and Tversky, 1979).
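
The logic of Odean's (1998) measure can be sketched as follows: on each day an investor sells something, every position in the account is classified as a realized or paper gain or loss relative to its purchase price, and the proportion of gains realized (PGR) is compared with the proportion of losses realized (PLR). The counts below are toy numbers, chosen so that winners are 50% more likely to be sold than losers.

```python
# Minimal sketch of the PGR/PLR comparison behind the disposition effect.
def disposition_effect(realized_gains, paper_gains, realized_losses, paper_losses):
    pgr = realized_gains / (realized_gains + paper_gains)     # proportion of gains realized
    plr = realized_losses / (realized_losses + paper_losses)  # proportion of losses realized
    return pgr, plr

pgr, plr = disposition_effect(realized_gains=60, paper_gains=140,
                              realized_losses=40, paper_losses=160)
print(f"PGR = {pgr:.2f}, PLR = {plr:.2f}")  # PGR > PLR indicates a disposition effect
```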

Medicine

The idea that cognitive failures are a primary source of medical errors has become prevalent in the medical literature (e.g., Detmer et al., 1978; Dawson and Arkes, 1987; Schmitt and Elstein, 1988; Elstein, 1999; Croskerry, 2003; Klein, 2005). In fact, emergency medicine has been described as a “natural laboratory of error” (Bogner, 1994). Among medical errors, diagnostic errors have received particular attention (Graber, 2013). Indeed, there is increasing evidence that mental shortcuts during information processing contribute to diagnostic errors (e.g., Schnapp et al., 2018).

It is not difficult to see how CB may impact medical decisions. Blumenthal-Barby and Krieger (2015) provided the following examples. A parent might refuse to vaccinate her child after seeing a media report of a child who developed autism after being vaccinated (availability bias). A patient with atrial fibrillation might refuse to take warfarin out of concern about causing a hemorrhagic stroke, despite the greater risk of having an ischemic stroke if she does not take it (omission bias). Indeed, early papers on this topic were primarily narrative reviews suggesting a possible impact of CB on medical decision-making. These papers follow the same logic: they first provide a general description of a few CB and then describe how these shortcuts can lead physicians to make poor decisions, such as wrong diagnoses (e.g., Dawson and Arkes, 1987; Elstein, 1999; Redelmeier, 2005). But narrative reviews provide limited evidence. As Zwaan et al. (2017, p. 105) outlined, “While these papers make a formidable argument that the biases described in the literature might cause a diagnostic error, empirical evidence that any of these biases actually causes diagnostic errors is sparse.”

On the other hand, studies that investigated the actual impact of CB on medical decisions are mainly experimental studies using written cases (hypothetical vignettes) designed to elicit a particular bias. A typical example of a vignette study is that of Mamede et al. (2010) on the effect of availability bias on diagnostic accuracy. In a first phase, participants (first-year and second-year internal medicine residents) were provided with six different cases and asked to rate the likelihood that the indicated diagnosis was correct (all cases were based on real patients with a confirmed diagnosis). Participants were then asked to diagnose eight new cases as quickly as possible, that is, relying on non-analytical reasoning. Half of those new cases were similar to the cases encountered in phase 1, so that availability bias was expected to reduce diagnostic accuracy on them. Second-year residents indeed had lower diagnostic accuracy on cases similar to those encountered in phase 1 than on the other cases, as they provided the phase 1 diagnosis more frequently for phase 2 cases they had previously encountered than for those they had not.

While vignette-based studies are the most frequent, researchers in this area have used diverse strategies (Blumenthal-Barby and Krieger, 2015). For instance, Crowley et al. (2013) developed a computer-based method to detect heuristics and biases in diagnostic reasoning as pathologists examine virtual slide cases. Each heuristic or bias is defined as a particular sequence of hypothesis, findings, and diagnosis formulation in the diagnostic reasoning interface (e.g., availability bias is considered to occur if in a sequence of three cases where the third case has a different diagnosis than the two previous ones, the participant makes an incorrect diagnosis in the third case such that the diagnosis is identical to the correct diagnosis in the two immediately preceding cases). Such a procedure allows for examining the relationships between heuristics and biases, and diagnostic errors.
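
A hedged sketch of that kind of sequence rule is given below; the data structure and field names are illustrative, not those of Crowley et al. (2013).

```python
# Flag a possible availability bias when, after two cases sharing the same
# correct diagnosis, the participant's incorrect answer on a third, different
# case repeats that earlier diagnosis.
from dataclasses import dataclass

@dataclass
class Case:
    correct: str  # confirmed diagnosis of the case
    answer: str   # diagnosis given by the participant

def availability_bias_flags(cases: list[Case]) -> list[int]:
    flags = []
    for i in range(2, len(cases)):
        a, b, c = cases[i - 2], cases[i - 1], cases[i]
        if (a.correct == b.correct != c.correct  # third case differs
                and c.answer != c.correct        # participant errs...
                and c.answer == a.correct):      # ...by repeating the prior diagnosis
            flags.append(i)
    return flags

seq = [Case("melanoma", "melanoma"), Case("melanoma", "melanoma"),
       Case("nevus", "melanoma")]
print(availability_bias_flags(seq))  # [2]
```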

Another methodology consists of reviewing instances where errors occurred and to which CB presumably contributed (e.g., Graber et al., 2005). However, studies following this methodology are vulnerable to hindsight bias: since reviewers are aware that an error was committed, they are prone to identify biases ex post (Wears and Nemeth, 2007). The fact that bias can be in the eye of the beholder was supported by Zwaan et al. (2017), who asked 37 physicians to read eight cases and indicate, from a list provided, which CB were present. In half the cases, the outcome implied a correct diagnosis; in the other half, an incorrect one. Physicians identified more biases when the case outcome implied an incorrect diagnosis (3.45 on average) than when it implied a correct one (1.75 on average).

To date, two systematic reviews have been published on the impact of CB on medical decision-making. Reviewing a total of 213 studies, Blumenthal-Barby and Krieger (2015) reported the following findings: (1) 77% of the studies (N = 164) were based on hypothetical vignettes; (2) 34% of studies (N = 73) investigated medical personnel; (3) 82% of the studies (N = 175) were conducted with representative populations; (4) 68% of the studies (N = 145 studies) confirmed a bias or heuristic in the study population; (5) the most studied CB are loss/gain framing bias (72 studies, 24.08%), omission bias (18 studies, 6.02%), relative risk bias (29 studies, 9.70%), and availability bias (22 studies, 7.36%); (6) the results regarding loss/gain framing bias are mixed with 39% of studies (N = 28) confirming an effect, 39% (N = 28) confirming an effect only in a subpopulation, and 22% (N = 16) disconfirming any effect; (7) 25 of 29 studies (86%) supported the impact of relative risk bias on medical decisions; and (8) 14 of 18 studies (78%) supported the impact of omission bias on medical decisions.

Saposnik et al. (2016) conducted a similar review, but one including only 20 studies. These authors reported that: (1) 60% of the studies (N = 12) targeted CB in diagnostic tasks; (2) the framing effect (N = 5) and overconfidence (N = 5) were the most common CB, while tolerance to risk or ambiguity was the most commonly studied personality trait (N = 5); (3) given that the large majority of the studies (85%) targeted only one or two biases, the true prevalence of CB influencing medical decisions remains unknown, and the reported prevalence varied widely (for example, across the three most comprehensive studies that accounted for several CB (Ogdie et al., 2012; Stiegler and Ruskin, 2012; Crowley et al., 2013), the prevalence of availability bias ranged from 7.8 to 75.6% and that of anchoring bias from 5.9 to 87.8%); (4) the presence of CB was associated with diagnostic inaccuracies in 36.5 to 77% of case scenarios, with physicians’ overconfidence, anchoring, and information or availability bias possibly associated with diagnostic inaccuracies; and (5) only seven studies (35%) provided information to evaluate the association between physicians’ CB and therapeutic or management errors, five of which (71.4%) showed an association between CB (anchoring, information bias, overconfidence, premature closure, representativeness, and confirmation bias) and such errors.

Law

Based on the legal realist premise that “judges are human,” recent years have seen growing interest in judicial decision-making (e.g., Klein and Mitchell, 2010; Dhami and Belton, 2017; Rachlinski, 2018). This topic covers issues such as cognitive models of judicial decision-making (e.g., the story model), the impact of extralegal factors on decisions, prejudice (e.g., gender bias and racial bias), moral judgments, group decision-making, and the comparison of lay and professional judges. It is worth noting that most research on judicial decision-making has focused on how jurors decide cases, relying on jury simulations (MacCoun, 1989). Here, we focus on how professional judges might be prone to CB. One might easily imagine how CB could hamper judicial decisions. In a narrative fashion, Peer and Gamliel (2013) reviewed how such biases could intervene during the hearing process (confirmation bias and hindsight bias), ruling (inability to ignore inadmissible evidence), and sentencing (anchoring effects). In fact, research suggests that judges, prosecutors, and other professionals in the legal field might rely on heuristics to produce their decisions, which leaves room for CB (e.g., Guthrie et al., 2007; Helm et al., 2016; Rachlinski and Wistrich, 2017).2

Researchers investigating judges’ decision-making have mainly relied upon archival studies (document analyses of court records) and experimental studies in which judges are asked to decide on hypothetical cases. In archival studies, researchers examine whether judges’ decisions in actual cases exhibit features of irrationality. For instance, Ebbesen and Konecni (1975) investigated which information felony court judges considered when deciding the amount of bail to set. When presented with fictitious cases, the judges’ decisions were influenced by relevant information, such as prior criminal record; their actual bail decisions, however, relied almost exclusively on prosecutorial recommendations. That is, judges seem to be (too) heavily affected by prosecutors’ recommendations. Another example of an archival study is the well-known research of Danziger et al. (2011), who highlighted a cycle in repeated judicial rulings: judges are initially lenient, progressively rule more in favor of the status quo over time, and become lenient again after a food break. This would suggest that psychological factors, such as mental fatigue, can influence legal decisions (but see Weinshall-Margel and Shapard, 2011). Archival studies, however, are limited by the difficulty of controlling for unobserved variables.

On the other hand, vignette studies consist of presenting judges with hypothetical scenarios simulating real legal cases. As in the medical field, researchers have primarily relied on such studies. A representative study is that of Guthrie et al. (2001), who administered a survey to 167 federal magistrate judges in order to assess the impact of five CB (anchoring, framing, hindsight bias, inverse fallacy, and egocentric bias) on their decisions regarding litigation problems (see Guthrie et al., 2002, for a summary of the research). Using materials adapting classic cognitive problems into legal ones, Guthrie et al. (2001) reported that judges fell prey to these biases, though to varying extents. For instance, to assess whether judges were susceptible to hindsight bias, Guthrie et al. (2001) presented them with a hypothetical case in which the plaintiff appealed the district court’s decision and asked them to indicate which of three possible outcomes of the appeal was most likely to have occurred. Crucially, they also provided them with the actual outcome of the court of appeals. The outcome significantly influenced judges’ assessments: those informed of a particular outcome were more likely to identify that outcome as the most likely to have occurred.

In particular, numerous studies have investigated the impact of anchoring effects on judicial decisions (see Bystranowski et al., 2021, for a recent meta-analysis). Judges and jurors are often required to translate qualitative judgments into quantitative decisions (Hans and Reyna, 2011; Rachlinski et al., 2015). While their qualitative judgments on matters such as the severity of the plaintiff’s injury or the appropriate severity of punishment show a high degree of consistency and predictability (Wissler et al., 1999), a great amount of variability appears (especially for non-economic and punitive damages) when these qualitative judgments are translated into numbers (e.g., civil damage awards and criminal sentences; Hart et al., 1997; Diamond et al., 1998). This might be explained by the fact that numerical assessments are prone to anchoring. Facing uncertainty about the amount to determine, judges and especially juries (due to their lack of experience and information about standard practice) tend to rely on any numerical point of reference and make their judgment through adjustments from that number. As these adjustments are often insufficient, the judgments are biased toward the anchor (see Kahneman et al., 1998, for a model describing how individual jurors set punitive damages and the role of anchoring in that process).

Accordingly, numerical values, such as a damage cap (e.g., Hinsz and Indahl, 1995; Robbennolt and Studebaker, 1999), the amount of damages claimed by the plaintiff (Chapman and Bornstein, 1996), the amount of economic damage (Eisenberg et al., 1997, 2006), the sentence imposed in the preceding case, a sentence urged by the prosecutor, or a sentence recommended by a probation officer, might act as anchors in the courtroom, moving the judges’ decisions toward them. Guthrie et al. (2001) reported that in a personal injury suit, an irrelevant factor, such as a number in a pre-trial motion (used to determine whether the damages met the minimum limit for federal court), could act as an anchor. They presented judges with a description of a serious personal injury suit in which only damages were at issue and asked them to estimate how much they would award the plaintiff in compensatory damages. Prior to this estimation, half of the judges were asked to rule on a pre-trial motion filed by the defendant to have the case dismissed for failing to meet the jurisdictional minimum in a diversity suit ($75,000). It turned out that the judges who were asked only to determine the damage award provided an average estimate of $1,249,000 while the judges who first ruled on the motion provided an average estimate of $882,000.

Enough and Mussweiler (2001) conducted a series of studies on how recommendations anchor judicial decisions, even when they are misleading. In their 2001 paper, they showed that sentencing decisions tend to follow the sentence demanded by the prosecutor. When told that the prosecutor recommended a sentence of 34 months, criminal trial judges recommended on average almost 7 months more prison time (M = 24.41 months) than when told that the demanded sentence was 12 months (M = 17.64 months) for the same crime. This anchoring effect was independent of the perceived relevance of the sentencing demand and of judges’ experience. Englich et al. (2006) reported that anchoring even occurs when the sentencing demand is determined randomly (the result of a dice throw). Interestingly, Englich et al. (2005) found that the defense’s sentencing recommendation is itself anchored on the prosecutor’s demand, so that the former mediates the impact of the latter on the judge’s decision. Therefore, while it is supposed to be to their advantage, the fact that defense attorneys present their sentencing recommendation after the prosecution might be a hidden disadvantage for the defense.
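
One way to summarize the size of such effects (a measure sometimes called an anchoring index; it is not reported in the original paper) is the shift in mean judgments expressed as a fraction of the shift in anchors:

```python
# Anchoring index computed from the means reported above.
m_high, m_low = 24.41, 17.64      # mean sentences (months) under 34- vs. 12-month demands
anchor_high, anchor_low = 34, 12  # prosecutor's demands (months)

anchoring_index = (m_high - m_low) / (anchor_high - anchor_low)
print(f"anchoring index = {anchoring_index:.2f}")  # ~0.31: judgments moved ~31% of the anchor gap
```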

Along with anchoring, the impact of hindsight bias in the courtroom has also been well documented, mainly in liability cases (Harley, 2007; Oeberst and Goeckenjan, 2016). When determining liability or negligence, judges and juries must assess whether the defendant is liable for a negative outcome (damage or injury). The difficulty is that jurors accomplish this task in retrospect: having knowledge of the outcome, jurors tend to perceive it as foreseeable and accordingly rate the negligence or liability of the defendant highly (Rachlinski et al., 2011). To avoid this bias, the law requires jurors to ignore the outcome information while evaluating the extent to which it should have been foreseen by the defendant. However, research suggests that jurors fall prey to hindsight bias just as laypersons do in nonlegal settings. When evaluating the precautions taken by a municipality to protect a riparian property owner from flood damage, participants assessing the situation in foresight concluded that a flood was too unlikely to justify further precautions, whereas participants assessing the same situation in hindsight considered that decision negligent and also gave higher estimates of the probability of the disaster occurring (Kamin and Rachlinski, 1995).

Outcome information has been shown to affect jurors’ decisions about punitive damage awards (Hastie et al., 1999) and about the legality of a search (Casper et al., 1989). In addition, more severe outcomes tend to produce a larger hindsight bias, a result particularly stressed in medical malpractice litigation (LaBine and LaBine, 1996). While the assessment of the accused physician’s negligence should be based on his or her course of action regardless of the outcome, jurors are highly influenced by the severity of a negative medical outcome when determining negligence in medical malpractice cases (Berlin and Hendrix, 1998). Cheney et al. (1989) reviewed 1,004 cases of anesthesia-related negligence and reported that the courts had imposed liability on the defendant in over 40 percent of the cases in which the physician had acted appropriately.

There is also significant evidence that confirmation bias (Nickerson, 1998) may impact professional judges’ decisions. In the legal field, confirmation bias has been primarily studied with regard to criminal investigations (Findley and Scott, 2006). Once they become convinced that the suspect is guilty, professionals involved in criminal proceedings (e.g., police officers and judges) may engage in guilt-confirming investigation endeavors (or tunnel vision) by which they undermine alternative scenarios in which the suspect is actually innocent. Several studies reported evidence of confirmation bias in criminal cases. For instance, O’Brien (2009) found that participants (college students) who articulated a hypothesis regarding the suspect early in their review of a mock police file showed bias in seeking and interpreting evidence to favor that hypothesis, thereby demonstrating a case-building mentality against a chosen suspect. Similarly, Lidén et al. (2019) showed that judges’ decisions to detain suspects trigger a confirmation bias that influences their subsequent assessment of guilt, and that this bias depends on who made the detention decision. In fact, judges perceived detained defendants’ statements as less trustworthy and were more likely to convict when they themselves had previously detained the suspect than when a colleague had made the detention decision.3

Table 1 provides a summary of the main CB in the four occupational areas reviewed and the corresponding evidence.

Table 1. Summary of the main cognitive biases studied in the fields of management, finance, medicine, and law, and corresponding evidence.

Discussion

The goal of the present paper was to provide an overview of the impact of CB on professional decision-making in various occupational areas (management, finance, medicine, and law). In all of them, there has been tremendous interest in that issue as revealed by a vast amount of research. Our review provided significant answers to the three research questions addressed.

First, the literature reviewed shows that, overall, professionals in the four areas covered are prone to CB. In management, there is evidence that risky-choice (loss/gain) framing effects and overconfidence (among CEOs) impact decision-making. In finance, there is strong evidence that overconfidence and the disposition effect (a consequence of loss aversion) impact individual investors’ decision-making. Regarding medical decision-making, the systematic review of Blumenthal-Barby and Krieger (2015) revealed that (1) 90% of the 213 studies reviewed confirmed a bias or heuristic in the study population or in a subpopulation of the study; (2) there is strong evidence that omission bias, relative risk bias, and availability bias have an impact on medical decisions, and mixed evidence for the risky-choice framing effect. On the other hand, the systematic review of Saposnik et al. (2016) – based on 20 studies only – reported that physicians’ overconfidence, anchoring, and availability bias were associated with diagnostic errors. Finally, the effects of anchoring, hindsight bias, and confirmation bias on judicial decision-making are well documented. Overall, overconfidence appears as the most recurrent CB over the four areas covered.

Second, the level of evidence supporting the claim that CB impact professionals’ decision-making differs across the four areas covered. In medicine and law, this issue has been primarily evidenced in vignette studies. Such primary data provide a relevant assessment of CB in decision-making, but they face the issue of ecological validity (see below). Accordingly, a mid-level of evidence can be assigned to these findings. On the other hand, following the method of revealed preference, by which the preferences of individuals are uncovered through the analysis of their choices in real-life settings, the impact of CB on financial decision-making has been evidenced through secondary data (e.g., trading records), indicating a higher level of evidence. In management, both levels of evidence are found (framing effects were demonstrated in vignette studies while CEO overconfidence was evidenced through secondary data).

A practical implication of these findings is the need for professionals to consider concrete, practical ways of mitigating the impact of CB on decision-making. In finance, this issue has been tackled with programs that aimed to improve financial literacy (Lusardi and Mitchell, 2014). In medicine, debiasing has been considered as a way to reduce the effects of CB (Graber et al., 2002, 2012; Croskerry, 2003; Croskerry et al., 2013). In fact, recent research has reported evidence that the debiasing of decisions can be effective (Morewedge et al., 2015; Sellier et al., 2019). However, a preliminary step to considering practical means of mitigating the impact of CB is to acknowledge this diagnosis. In fact, professionals are reluctant to accept the idea that their decisions may be biased (e.g., Kukucka et al., 2017). Judges, for instance, tend to dismiss the evidence showing the impact of CB on judicial decisions, arguing that most studies did not investigate decisions on real cases (Dhami and Belton, 2017).

Third, our review highlights two major research gaps. The first is a potential lack of ecological validity of the findings from vignette studies, which are numerous (Blumenthal-Barby and Krieger, 2015). Consider, for instance, a study designed to test whether sentencing decisions can be anchored by certain information, such as the sentence demanded by the prosecutor (Enough and Mussweiler, 2001). A typical study consists of presenting judges with a vignette describing a hypothetical criminal case and asking them to sentence the defendant (e.g., Rachlinski et al., 2015). If a statistically significant difference is observed between the anchor conditions, it is concluded that anchoring impacts judges’ sentencing decisions. Does such a finding mean that judges’ sentencing decisions in real cases are affected by anchoring too? Likewise, it has been reported that 90% of judges solve the Wason selection task incorrectly (Rachlinski et al., 2013), but this does not imply per se that confirmation bias impedes judges’ decisions in their regular work. Addressing that issue requires more ecologically valid settings, such as mock trials in the case of judicial decision-making (Diamond, 1997).

The second research gap is the neglect of individual differences in CB, a limitation found across all four areas covered. Individual differences have been neglected in decision-making research in general (Stanovich et al., 2011; Mohammed and Schwall, 2012). Indeed, most of the current knowledge about the impact of CB on decision-making relies upon experimental research and group comparisons (Gilovich et al., 2002). For instance, based on the experimental result described above, one might wrongly infer that all judges are equally susceptible to anchoring. That is why Guthrie et al. (2007, p. 28) clarified that “the fact that we generally observed statistically significant differences between the control group judges and experimental group judges does not mean that every judge made intuitive decisions. […] Our results only show that, as a group, the judges were heavily influenced by their intuition – they do not tell us which judges were influenced and by how much.” In fact, there is clear evidence for individual differences in susceptibility to CB (e.g., Bruine de Bruin et al., 2007).

The issue of individual differences is of primary importance when considering CB in decision-making, especially among professionals. In finance for example, the measurement of the disposition effect at the individual level revealed significant individual differences, 20% of investors showing no disposition effect or a reverse effect (Talpsepp, 2011). Taking full account of individual differences is crucial when considering public interventions aiming to mitigate individual biases: any single intervention might work on individuals highly susceptible to the bias addressed while having no or even harmful effects on individuals moderately susceptible to it (Rachlinski, 2006).

Addressing the issue of individual differences in bias susceptibility requires standardized, reliable measures (Berthet, 2021). While reliable measures of a dozen CB are currently available, measures of key biases are still lacking (e.g., confirmation bias and availability bias). Most importantly, these measures are generic, using non-contextualized items. Such measures are relevant for research describing general aspects of decision-making (Parker and Fischhoff, 2005; Bruine de Bruin et al., 2007). However, research on individual differences in professional decision-making requires specific measures whose items are adapted to the context in which a particular decision is made (e.g., diagnostic or sentencing decisions). An example is the inventory of cognitive biases in medicine (Hershberger et al., 1994), which aims to measure 10 CB in doctors (e.g., insensitivity to prior probability and insensitivity to sample size) through 22 medical scenarios. The development of such instruments in the context of management, finance, and law is an important avenue for future research on professional decision-making.
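
As a sketch of what validating such a contextualized instrument might involve, the snippet below scores simulated respondents on a set of domain-specific vignette items and computes Cronbach's alpha, a standard index of internal-consistency reliability. The data and the single-factor scoring model are hypothetical.

```python
# Reliability check for a hypothetical vignette-based bias measure.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of per-vignette bias scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(0, 1, (200, 1))           # latent susceptibility per respondent
items = trait + rng.normal(0, 1, (200, 10))  # 10 vignette items = trait + noise
print(f"alpha = {cronbach_alpha(items):.2f}")
```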

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

1. ^It should be noted that most research in behavioral finance has focused on individual rather than professional investors (e.g., mutual funds, hedge funds, pension funds, and investment advisors). Findings suggest that institutional investors are prone to various CB but to a lesser extent than individual investors (e.g., Kaustia et al., 2008).

2. ^Interestingly, the notion of cognitive bias might also shed light on certain rules of law. For example, Guthrie et al. (2001) presented judges with a problem based on the classic English case Byrne v. Boadle (1863) and asked them to assess the likelihood that a warehouse was negligent for an accident involving a barrel that injured a bystander. The materials indicated that when the warehouse is careful, accidents occur one time in 1,000, but that when the warehouse is negligent, accidents occur 90% of the time. The materials also indicated that the defendant is negligent only 1% of the time. Judges overestimated the probability that the defendant was negligent, failing to consider the base rate of negligence. Interestingly, this logical fallacy is enshrined in the doctrine of res ipsa loquitur, which instructs judges to take no account of the base rates (Kaye, 1979).

3. ^Note that other CB, such as framing and omission bias, might also shed light on judicial decision-making (Rachlinski, 2018). In fact, judges decide cases differently depending on whether the underlying facts are presented as gains or losses (Rachlinski and Wistrich, 2018). Moreover, viewing the acceptance of a claim as the path of action and its dismissal as the path of inaction, omission bias might explain why judges’ threshold for accepting a plaintiff’s claim is particularly high (Zamir and Ritov, 2012). However, those biases have been much less studied than anchoring and hindsight bias.

References

Baker, H. K., and Nofsinger, J. R. (2002). Psychological biases of investors. Financ. Serv. Rev. 11, 97–116.

Baker, H. K., and Nofsinger, J. R. (Eds.). (2010). Behavioral Finance: Investors, Corporations, and Markets. Vol. 6. New York: John Wiley & Sons.

Baker, H. K., and Ricciardi, V. (Eds.). (2014). Investor Behavior: The Psychology of Financial Planning and Investing. New York: John Wiley & Sons.

Barber, B. M., and Odean, T. (2000). Trading is hazardous to your wealth: The common stock investment performance of individual investors. J. Financ. 55, 773–806. doi: 10.1111/0022-1082.00226

Barber, B., and Odean, T. (2001). Boys will be boys: gender, overconfidence, and common stock investment. Q. J. Econ. 116, 261–292. doi: 10.1162/003355301556400

Barnes, J. H. (1984). Cognitive biases and their impact on strategic planning. Strateg. Manag. J. 5, 129–137. doi: 10.1002/smj.4250050204

Baron, J. (2008). Thinking and Deciding. 4th Edn. Cambridge: Cambridge University Press.

Baron, J., and Hershey, J. C. (1988). Outcome bias in decision evaluation. J. Pers. Soc. Psychol. 54, 569–579. doi: 10.1037/0022-3514.54.4.569

Baron, J., and Ritov, I. (2004). Omission bias, individual differences, and normality. Organ. Behav. Hum. Decis. Process. 94, 74–85. doi: 10.1016/j.obhdp.2004.03.003

Bazerman, M. H. (1998). Judgment in Managerial Decision Making. New York: Wiley.

Bazerman, M. H., and Moore, D. (2008). Judgment in Managerial Decision Making. 7th Edn. Hoboken, NJ: Wiley.

Benartzi, S., and Thaler, R. H. (1995). Myopic loss aversion and the equity premium puzzle. Q. J. Econ. 110, 73–92. doi: 10.2307/2118511

Ben-David, I., Graham, J., and Harvey, C. (2013). Managerial miscalibration. Q. J. Econ. 128, 1547–1584. doi: 10.1093/qje/qjt023

Berlin, L., and Hendrix, R. W. (1998). Perceptual errors and negligence. AJR Am. J. Roentgenol. 170, 863–867. doi: 10.2214/ajr.170.4.9530024

Berthet, V. (2021). The measurement of individual differences in cognitive biases: A review and improvement. Front. Psychol. 12:630177. doi: 10.3389/fpsyg.2021.630177

Blumenthal-Barby, J. S., and Krieger, H. (2015). Cognitive biases and heuristics in medical decision making: A critical review using a systematic search strategy. Med. Decis. Mak. 35, 539–557. doi: 10.1177/0272989X14547740

Bogner, M. S. (Ed.) (1994). Human Error in Medicine. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Boolell-Gunesh, S., Broihanne, M., and Merli, M. (2009). Disposition effect, investor sophistication and taxes: Some French specificities. Finance 30, 51–78. doi: 10.3917/fina.301.0051

Brenner, L. A., Koehler, D. J., Liberman, V., and Tversky, A. (1996). Overconfidence in probability and frequency judgments: A critical examination. Organ. Behav. Hum. Decis. Process. 65, 212–219. doi: 10.1006/obhd.1996.0021

Bruine de Bruin, W., Parker, A. M., and Fischhoff, B. (2007). Individual differences in adult decision-making competence. J. Pers. Soc. Psychol. 92, 938–956. doi: 10.1037/0022-3514.92.5.938

Bukszar, E., and Connolly, T. (1988). Hindsight bias and strategic choice: Some problems in learning from experience. Acad. Manag. J. 31, 628–641.

Bystranowski, P., Janik, B., Próchnicki, M., and Skórska, P. (2021). Anchoring effect in legal decision-making: A meta-analysis. Law Hum. Behav. 45, 1–23. doi: 10.1037/lhb0000438

Casper, J. D., Benedict, K., and Perry, J. L. (1989). Juror decision making, attitudes, and the hindsight bias. Law Hum. Behav. 13, 291–310. doi: 10.1007/BF01067031

Chapman, G. B., and Bornstein, B. H. (1996). The more you ask for, the more you get: anchoring in personal injury verdicts. Appl. Cogn. Psychol. 10, 519–540. doi: 10.1002/(SICI)1099-0720(199612)10:6<519::AID-ACP417>3.0.CO;2-5

Cheney, F. W., Posner, K., Caplan, R. A., and Ward, R. J. (1989). Standard of care and anesthesia liability. JAMA 261, 1599–1603. doi: 10.1001/jama.1989.03420110075027

Chuang, W.-I., and Lee, B.-S. (2006). An empirical evaluation of the overconfidence hypothesis. J. Bank. Financ. 30, 2489–2515. doi: 10.1016/j.jbankfin.2005.08.007

Cohen, M. D., March, J. G., and Olsen, J. P. (1972). A garbage can model of organizational choice. Adm. Sci. Q. 17, 1–25. doi: 10.2307/2392088

Coval, J. D., and Moskowitz, T. J. (1999). Home bias at home: local equity preference in domestic portfolios. J. Financ. 54, 2045–2073. doi: 10.1111/0022-1082.00181

Croskerry, P. (2003). The importance of cognitive errors in diagnosis and strategies to minimize them. Acad. Med. 78, 775–780. doi: 10.1097/00001888-200308000-00003

Croskerry, P., Singhal, G., and Mamede, S. (2013). Cognitive debiasing 1: origins of bias and theory of debiasing. BMJ Qual. Saf. 22(Suppl 2), 58–64. doi: 10.1136/bmjqs-2012-001712

Crowley, R. S., Legowski, E., Medvedeva, O., Reitmeyer, K., Tseytlin, E., Castine, M., et al. (2013). Automated detection of heuristics and biases among pathologists in a computer-based system. Adv. Health Sci. Educ. Theory Pract. 18, 343–363. doi: 10.1007/s10459-012-9374-z

Danziger, S., Levav, J., and Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proc. Natl. Acad. Sci. 108, 6889–6892. doi: 10.1073/pnas.1018033108

Das, T. K., and Teng, B. (1999). Cognitive biases and strategic decision processes: An integrative perspective. J. Manag. Stud. 36, 757–778. doi: 10.1111/1467-6486.00157

Dawson, N. V., and Arkes, H. R. (1987). Systematic errors in medical decision making. J. Gen. Intern. Med. 2, 183–187. doi: 10.1007/BF02596149

De Bondt, W. F. (1998). A portrait of the individual investor. Eur. Econ. Rev. 42, 831–844. doi: 10.1016/S0014-2921(98)00009-9

De Bondt, W. F. M., and Thaler, R. (1985). Does the stock market overreact? J. Financ. 40, 793–805. doi: 10.1111/j.1540-6261.1985.tb05004.x

De Long, J. B., Shleifer, A., Summers, L. H., and Waldmann, R. J. (1990). Noise trader risk in financial markets. J. Polit. Econ. 98, 703–738.

Detmer, D. E., Fryback, D. G., and Gassner, K. (1978). Heuristics and biases in medical decision-making. J. Med. Educ. 53, 682–683.

Dhami, M. K., and Belton, I. K. (2017). On getting inside the judge’s mind. Trans. Issues Psychol. Sci. 3, 214–226. doi: 10.1037/tps0000115

Diamond, S. S. (1997). Illuminations and shadows from jury simulations. Law Hum. Behav. 21, 561–571. doi: 10.1023/A:1024831908377

Diamond, S. S., Saks, M. J., and Landsman, S. (1998). Juror judgments about liability and damages: sources of variability and ways to increase consistency. DePaul Law Rev. 48, 301–325.

Duhaime, I. M., and Schwenk, C. R. (1985). Conjectures on cognitive simplification in acquisition and divestment decision making. Acad. Manag. Rev. 10, 287–295. doi: 10.5465/amr.1985.4278207

Ebbesen, E. B., and Konecni, V. J. (1975). Decision making and information integration in the courts: The setting of bail. J. Pers. Soc. Psychol. 32, 805–821. doi: 10.1037/0022-3514.32.5.805

Eisenberg, T., Goerdt, J., Ostrom, B., Rottman, D., and Wells, M. T. (1997). The predictability of punitive damages. J. Leg. Stud. 26, 623–661. doi: 10.1086/468010

Eisenberg, T., Hannaford-Agor, P. L., Heise, M., LaFountain, N., Munsterman, G. T., Ostrom, B., et al. (2006). Juries, judges, and punitive damages: empirical analyses using the civil justice survey of state courts 1992, 1996, and 2001 data. J. Empir. Leg. Stud. 3, 263–295. doi: 10.1111/j.1740-1461.2006.00070.x

Eisenhardt, K. M., and Zbaracki, M. J. (1992). Strategic decision making. Strateg. Manag. J. 13, 17–37. doi: 10.1002/smj.4250130904

Elstein, A. S. (1999). Heuristics and biases: selected errors in clinical reasoning. Acad. Med. 74, 791–794. doi: 10.1097/00001888-199907000-00012

Englich, B., Mussweiler, T., and Strack, F. (2005). The last word in court: a hidden disadvantage for the defense. Law Hum. Behav. 29, 705–722. doi: 10.1007/s10979-005-8380-7

Englich, B., Mussweiler, T., and Strack, F. (2006). Playing dice with criminal sentences: the influence of irrelevant anchors on experts’ judicial decision making. Personal. Soc. Psychol. Bull. 32, 188–200. doi: 10.1177/0146167205282152

Enough, B., and Mussweiler, T. (2001). Sentencing under uncertainty: anchoring effects in the courtroom. J. Appl. Soc. Psychol. 31, 1535–1551. doi: 10.1111/j.1559-1816.2001.tb02687.x

Findley, K. A., and Scott, M. S. (2006). The multiple dimensions of tunnel vision in criminal cases. Wis. Law Rev. 2, 291–398.

Fischhoff, B. (1975). Hindsight ≠ foresight: The effect of outcome knowledge on judgment under uncertainty. J. Exp. Psychol. Hum. Percept. Perform. 1, 288–299.

Forrow, L., Taylor, W. C., and Arnold, R. M. (1992). Absolutely relative: how research results are summarized can affect treatment decisions. Am. J. Med. 92, 121–124. doi: 10.1016/0002-9343(92)90100-P

Gigerenzer, G. (1991). “How to make cognitive illusions disappear: beyond ‘heuristics and biases’,” in European Review of Social Psychology. Vol. 2. eds. W. Stroebe and M. Hewstone (Chichester: Wiley), 83–115.

Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Psychol. Rev. 103, 592–596. doi: 10.1037/0033-295X.103.3.592

Gigerenzer, G., Hertwig, R., Hoffrage, U., and Sedlmeier, P. (2008). “Cognitive illusions reconsidered,” in Handbook of Experimental Economics Results. eds. C. R. Plott and V. L. Smith (Amsterdam: Elsevier), 1018–1034. doi: 10.1016/S1574-0722(07)00109-6

Gilovich, T., Griffin, D., and Kahneman, D. (Eds.) (2002). Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press.

Glaser, M., and Weber, M. (2007). Overconfidence and trading volume. Geneva Risk Insur. Rev. 32, 1–36. doi: 10.1007/s10713-007-0003-3

Graber, M. L. (2013). The incidence of diagnostic error in medicine. BMJ Qual. Saf. 22(Suppl 2), 21–27. doi: 10.1136/bmjqs-2012-001615

Graber, M. L., Franklin, N., and Gordon, R. (2005). Diagnostic error in internal medicine. Arch. Intern. Med. 165, 1493–1499. doi: 10.1001/archinte.165.13.1493

Graber, M., Gordon, R., and Franklin, N. (2002). Reducing diagnostic errors in medicine: what’s the goal? Acad. Med. 77, 981–992. doi: 10.1097/00001888-200210000-00009

Graber, M. L., Kissam, S., Payne, V. L., Meyer, A. N., Sorensen, A., Lenfestey, N., et al. (2012). Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual. Saf. 21, 535–557. doi: 10.1136/bmjqs-2011-000149

Grinblatt, M., Titman, S., and Wermers, R. (1995). Momentum investment strategies, portfolio performance, and herding: A study of mutual fund behavior. Am. Econ. Rev. 85, 1088–1105.

Guthrie, C., Rachlinski, J. J., and Wistrich, A. J. (2001). Inside the judicial mind. Cornell Law Rev. 86, 777–830. doi: 10.2139/ssrn.257634

Guthrie, C., Rachlinski, J. J., and Wistrich, A. J. (2002). Judging by heuristic: cognitive illusions in judicial decision making. Judicature 86, 44–50.

Guthrie, C., Rachlinski, J. J., and Wistrich, A. J. (2007). Blinking on the bench: how judges decide cases. Cornell Law Rev. 93, 1–43.

Hans, V. P., and Reyna, V. F. (2011). To dollars from sense: qualitative to quantitative translation in jury damage awards. J. Empir. Leg. Stud. 8, 120–147. doi: 10.1111/j.1740-1461.2011.01233.x

Hardman, D., and Harries, C. (2002). How rational are we? Psychologist 15, 76–79.

Harley, E. M. (2007). Hindsight bias in legal decision making. Soc. Cogn. 25, 48–63. doi: 10.1521/soco.2007.25.1.48

Hart, A. J., Evans, D. L., Wissler, R. L., Feehan, J. W., and Saks, M. J. (1997). Injuries, prior beliefs, and damage awards. Behav. Sci. Law 15, 63–82. doi: 10.1002/(SICI)1099-0798(199724)15:1<63::AID-BSL254>3.0.CO;2-9

Hastie, R., Schkade, D. A., and Payne, J. W. (1999). Juror judgments in civil cases: effects of plaintiff’s requests and plaintiff’s identity on punitive damage awards. Law Hum. Behav. 23, 445–470. doi: 10.1023/A:1022312115561

Helm, R. K., Wistrich, A. J., and Rachlinski, J. J. (2016). Are arbitrators human? J. Empir. Leg. Stud. 13, 666–692. doi: 10.1111/jels.12129

Hershberger, P. J., Part, H. M., Markert, R. J., Cohen, S. M., and Finger, W. W. (1994). Development of a test of cognitive bias in medical decision making. Acad. Med. 69, 839–842. doi: 10.1097/00001888-199410000-00014

Hinsz, V. B., and Indahl, K. E. (1995). Assimilation to anchors for damage awards in a mock civil trial. J. Appl. Soc. Psychol. 25, 991–1026. doi: 10.1111/j.1559-1816.1995.tb02386.x

Hodgkinson, G. (2001). “Cognitive processes in strategic management: some emerging trends and future directions,” in Handbook of Industrial, Work and Organizational Psychology. Vol. 2: Organizational Psychology. eds. N. Anderson, D. S. Ones, and H. K. Sinangil (London: SAGE Publications Ltd.), 416–440.

Hodgkinson, G. P., Bown, N. J., Maule, A. J., Glaister, K. W., and Pearman, A. D. (1999). Breaking the frame: an analysis of strategic cognition and decision making under uncertainty. Strateg. Manag. J. 20, 977–985. doi: 10.1002/(SICI)1097-0266(199910)20:10<977::AID-SMJ58>3.0.CO;2-X

Huff, A. S., and Schwenk, C. (1990). “Bias and sensemaking in good times and bad,” in Mapping Strategic Thought. ed. A. S. Huff (Chichester: Wiley), 89–108.

Johnson, G. (1987). Strategic Change and the Management Process. Oxford: Basil Blackwell.

Joyce, E., and Biddle, G. (1981). Anchoring and adjustment in probabilistic inference in auditing. J. Account. Res. 19, 120–145. doi: 10.2307/2490965

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Kahneman, D., and Frederick, S. (2002). “Representativeness revisited: attribute substitution in intuitive judgment,” in Heuristics and Biases: The Psychology of Intuitive Judgment. T. Gilovich, D. Griffin, and D. Kahneman (Eds.) (Cambridge: Cambridge University Press), 103–119.

Kahneman, D., Schkade, D., and Sunstein, C. (1998). Shared outrage and erratic awards: The psychology of punitive damages. J. Risk Uncertain. 16, 49–86. doi: 10.1023/A:1007710408413

Kahneman, D., Slovic, P., and Tversky, A. (Eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Kahneman, D., and Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica 47, 263–291. doi: 10.2307/1914185

Kamin, K. A., and Rachlinski, J. J. (1995). Ex post ≠ ex ante: determining liability in hindsight. Law Hum. Behav. 19, 89–104. doi: 10.1007/BF01499075

Kaustia, M., Alho, E., and Puttonen, V. (2008). How much does expertise reduce behavioral biases? The case of anchoring effects in stock return estimates. Financ. Manag. 37, 391–412. doi: 10.1111/j.1755-053X.2008.00018.x

Kaye, D. (1979). Probability theory meets res ipsa loquitur. Mich. Law Rev. 77, 1456–1484. doi: 10.2307/1288109

Klein, J. G. (2005). Five pitfalls in decisions about diagnosis and prescribing. BMJ 330, 781–783. doi: 10.1136/bmj.330.7494.781

Klein, D. E., and Mitchell, G. (Eds.) (2010). The Psychology of Judicial Decision Making. New York, NY: Oxford University Press.

Kukucka, J., Kassin, S. M., Zapf, P. A., and Dror, I. E. (2017). Cognitive bias and blindness: A global survey of forensic science examiners. J. Appl. Res. Mem. Cogn. 6, 452–459. doi: 10.1016/j.jarmac.2017.09.001

Kumar, S., and Goyal, N. (2015). Behavioural biases in investment decision making – A systematic literature review. Qual. Res. Financ. Markets 7, 88–108. doi: 10.1108/QRFM-07-2014-0022

LaBine, S. J., and LaBine, G. (1996). Determinations of negligence and the hindsight bias. Law Hum. Behav. 20, 501–516. doi: 10.1007/BF01499038

Lidén, M., Gräns, M., and Juslin, P. (2019). ‘Guilty, no doubt’: detention provoking confirmation bias in judges’ guilt assessments and debiasing techniques. Psychol. Crime Law 25, 219–247. doi: 10.1080/1068316X.2018.1511790

Lusardi, A., and Mitchell, O. S. (2014). The economic importance of financial literacy: theory and evidence. J. Econ. Lit. 52, 5–44.

Lyles, M. A., and Thomas, H. (1988). Strategic problem formulation: biases and assumptions embedded in alternative decision-making models. J. Manag. Stud. 25, 131–145. doi: 10.1111/j.1467-6486.1988.tb00028.x

MacCoun, R. J. (1989). Experimental research on jury decision-making. Science 244, 1046–1050. doi: 10.1126/science.244.4908.1046

Malmendier, U., and Tate, G. (2005). CEO overconfidence and corporate investment. J. Financ. 60, 2661–2700. doi: 10.1111/j.1540-6261.2005.00813.x

Malmendier, U., and Tate, G. (2008). Who makes acquisitions? CEO overconfidence and the market’s reaction. J. Financ. Econ. 89, 20–43. doi: 10.1016/j.jfineco.2007.07.002

Mamede, S., van Gog, T., van den Berge, K., Rikers, R. M., van Saase, J. L., van Guldener, C., et al. (2010). Effect of availability bias and reflective reasoning on diagnostic accuracy among internal medicine residents. JAMA 304, 1198–1203. doi: 10.1001/jama.2010.1276

March, J. G., and Shapira, Z. (1987). Managerial perspectives on risk and risk taking. Manag. Sci. 33, 1404–1418. doi: 10.1287/mnsc.33.11.1404

March, J. G., and Simon, H. A. (1958). Organizations. New York: Wiley.

Maule, A. J., and Hodgkinson, G. P. (2002). Heuristics, biases and strategic decision making. Psychologist 15, 68–71.

Mintzberg, H. (1983). Power In and Around Organizations. Englewood Cliffs, NJ: Prentice-Hall.

Mohammed, S., and Schwall, A. (2012). Individual differences and decision making: what we know and where we go from here. Int. Rev. Ind. Organ. Psychol. 24, 249–312. doi: 10.1002/9780470745267.ch8

Moore, D. A., Oesch, J. M., and Zietsma, C. (2007). What competition? Myopic self-focus in market-entry decisions. Organ. Sci. 18, 440–454. doi: 10.1287/orsc.1060.0243

Moore, D. A., and Schatz, D. (2017). The three faces of overconfidence. Soc. Personal. Psychol. Compass 11:e12331. doi: 10.1111/spc3.12331

Morewedge, C. K., Yoon, H., Scopelliti, I., Symborski, C., Korris, J., and Kassam, K. S. (2015). Debiasing decisions: improved decision making with a single training intervention. Policy Insights Behav. Brain Sci. 2, 129–140. doi: 10.1177/2372732215600886

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. doi: 10.1037/1089-2680.2.2.175

O’Brien, B. (2009). Prime suspect: An examination of factors that aggravate and counteract confirmation bias in criminal investigations. Psychol. Public Policy Law 15, 315–334. doi: 10.1037/a0017881

Odean, T. (1998). Are investors reluctant to realize their losses? J. Financ. 53, 1775–1798. doi: 10.1111/0022-1082.00072

Odean, T. (1999). Do investors trade too much? Am. Econ. Rev. 89, 1279–1298. doi: 10.1257/aer.89.5.1279

Oeberst, A., and Goeckenjan, I. (2016). When being wise after the event results in injustice: evidence for hindsight bias in judges’ negligence assessments. Psychol. Public Policy Law 22, 271–279. doi: 10.1037/law0000091

Ogdie, A. R., Reilly, J. B., Pang, W. G., Keddem, S., Barg, F. K., Von Feldt, J. M., et al. (2012). Seen through their eyes: residents’ reflections on the cognitive and contextual components of diagnostic errors in medicine. Acad. Med. 87, 1361–1367. doi: 10.1097/ACM.0b013e31826742c9

Parker, A. M., and Fischhoff, B. (2005). Decision-making competence: external validation through an individual-differences approach. J. Behav. Decis. Mak. 18, 1–27. doi: 10.1002/bdm.481

Peer, E., and Gamliel, E. (2013). Heuristics and biases in judicial decisions. Court Rev. 49, 114–118.

Perneger, T. V., and Agoritsas, T. (2011). Doctors and patients’ susceptibility to framing bias: A randomized trial. J. Gen. Intern. Med. 26, 1411–1417. doi: 10.1007/s11606-011-1810-x

Pohl, R. F. (2017). “Cognitive illusions,” in Cognitive Illusions: Intriguing Phenomena in Thinking, Judgment and Memory (London; New York, NY: Routledge/Taylor & Francis Group), 3–21.

Powell, T. C., Lovallo, D., and Fox, C. (2011). Behavioral strategy. Strateg. Manag. J. 32, 1369–1386. doi: 10.1002/smj.968

Rachlinski, J. J. (2006). Cognitive errors, individual differences, and paternalism. Univ. Chicago Law Rev. 73, 207–229. doi: 10.1093/acprof:oso/9780199211395.003.0008

Rachlinski, J. J. (2018). “Judicial decision-making,” in Behavioral Law and Economics. E. Zamir and D. Teichman (Eds.) (New York, NY: Oxford University Press), 525–565.

Rachlinski, J. J., Guthrie, C., and Wistrich, A. J. (2011). Probable cause, probability, and hindsight. J. Empir. Leg. Stud. 8, 72–98. doi: 10.1111/j.1740-1461.2011.01230.x

Rachlinski, J. J., Guthrie, C., and Wistrich, A. J. (2013). How lawyers’ intuitions prolong litigation. South. Calif. Law Rev. 86, 571–636.

Rachlinski, J. J., and Wistrich, A. J. (2017). Judging the judiciary by the numbers: empirical research on judges. Ann. Rev. Law Soc. Sci. 13, 203–229. doi: 10.1146/annurev-lawsocsci-110615-085032

Rachlinski, J. J., and Wistrich, A. J. (2018). Gains, losses, and judges: framing and the judiciary. Notre Dame Law Rev. 94, 521–582.

Rachlinski, J. J., Wistrich, A. J., and Guthrie, C. (2015). Can judges make reliable numeric judgments? Distorted damages and skewed sentences. Indiana Law J. 90, 695–739.

Redelmeier, D. A. (2005). The cognitive psychology of missed diagnoses. Ann. Intern. Med. 142, 115–120. doi: 10.7326/0003-4819-142-2-200501180-00010

Robbennolt, J. K., and Studebaker, C. A. (1999). Anchoring in the courtroom: The effects of caps on punitive damages. Law Hum. Behav. 23, 353–373. doi: 10.1023/A:1022312716354

Saposnik, G., Redelmeier, D., Ruff, C. C., and Tobler, P. N. (2016). Cognitive biases associated with medical decisions: a systematic review. BMC Med. Inform. Decis. Mak. 16:138. doi: 10.1186/s12911-016-0377-1

Schmitt, B. P., and Elstein, A. S. (1988). Patient management problems: heuristics and biases. Med. Decis. Mak. 8, 224–225.

Schnapp, B. H., Sun, J. E., Kim, J. L., Strayer, R. J., and Shah, K. H. (2018). Cognitive error in an academic emergency department. Diagnosis 5, 135–142. doi: 10.1515/dx-2018-0011

Schwenk, C. R. (1982). Dialectical inquiry in strategic decision-making: A comment on the continuing debate. Strateg. Manag. J. 3, 371–373. doi: 10.1002/smj.4250030408

Schwenk, C. R. (1984). Cognitive simplification processes in strategic decision-making. Strateg. Manag. J. 5, 111–128. doi: 10.1002/smj.4250050203

Schwenk, C. R. (1985). Management illusions and biases: their impact on strategic decisions. Long Range Plan. 18, 74–80. doi: 10.1016/0024-6301(85)90204-3

Schwenk, C. R. (1988). The cognitive perspective on strategic decision making. J. Manag. Stud. 25, 41–55. doi: 10.1111/j.1467-6486.1988.tb00021.x

Sellier, A. L., Scopelliti, I., and Morewedge, C. K. (2019). Debiasing training improves decision making in the field. Psychol. Sci. 30, 1371–1379. doi: 10.1177/0956797619861429

Shefrin, H. (2000). Beyond Greed and Fear: Understanding Behavioral Finance and the Psychology of Investing. Boston: Harvard Business School Press.

Shefrin, H., and Statman, M. (1985). The disposition to sell winners too early and ride losers too long: theory and evidence. J. Financ. 40, 777–790. doi: 10.1111/j.1540-6261.1985.tb05002.x

Shiller, R. J. (2003). From efficient markets theory to behavioral finance. J. Econ. Perspect. 17, 83–104. doi: 10.1257/089533003321164967

Stanovich, K. E., Toplak, M. E., and West, R. F. (2008). The development of rational thought: a taxonomy of heuristics and biases. Adv. Child Dev. Behav. 36, 251–285. doi: 10.1016/S0065-2407(08)00006-2

Stanovich, K. E., West, R. F., and Toplak, M. E. (2011). “Individual differences as essential components of heuristics and biases research,” in The Science of Reason: A Festschrift for Jonathan St B. T. Evans. K. Manktelow, D. Over, and S. Elqayam (Eds.) (New York: Psychology Press), 355–396.

Statman, M., Thorley, S., and Vorkink, K. (2006). Investor overconfidence and trading volume. Rev. Financ. Stud. 19, 1531–1565. doi: 10.1093/rfs/hhj032

Stiegler, M. P., and Ruskin, K. J. (2012). Decision-making and safety in anesthesiology. Curr. Opin. Anaesthesiol. 25, 724–729. doi: 10.1097/ACO.0b013e328359307a

Talpsepp, T. (2011). Reverse disposition effect of foreign investors. J. Behav. Financ. 12, 183–200. doi: 10.1080/15427560.2011.606387

Tversky, A., and Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cogn. Psychol. 5, 207–232. doi: 10.1016/0010-0285(73)90033-9

Tversky, A., and Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science 185, 1124–1131. doi: 10.1126/science.185.4157.1124

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683

Vranas, P. B. M. (2000). Gigerenzer’s normative critique of Kahneman and Tversky. Cognition 76, 179–193. doi: 10.1016/S0010-0277(99)00084-0

Wears, R. L., and Nemeth, C. P. (2007). Replacing hindsight with insight: toward better understanding of diagnostic failures. Ann. Emerg. Med. 49, 206–209. doi: 10.1016/j.annemergmed.2006.08.027

Weinshall-Margel, K., and Shapard, J. (2011). Overlooked factors in the analysis of parole decisions. Proc. Natl. Acad. Sci. 108:E833. doi: 10.1073/pnas.1110910108

Wissler, R. L., Hart, A. J., and Saks, M. J. (1999). Decision-making about general damages: A comparison of jurors, judges, and lawyers. Mich. Law Rev. 98, 751–826. doi: 10.2307/1290315

Zajac, E. J., and Bazerman, M. H. (1991). Blind spots in industry and competitor analysis: implications of interfirm (mis)perceptions for strategic decisions. Acad. Manag. Rev. 16, 37–56. doi: 10.5465/amr.1991.4278990

Zamir, E., and Ritov, I. (2012). Loss aversion, omission bias, and the burden of proof in civil litigation. J. Leg. Stud. 41, 165–207. doi: 10.1086/664911

Zwaan, L., Monteiro, S., Sherbino, J., Ilgen, J., Howey, B., and Norman, G. (2017). Is bias in the eye of the beholder? A vignette study to assess recognition of cognitive biases in clinical case workups. BMJ Qual. Saf. 26, 104–110. doi: 10.1136/bmjqs-2015-005014

Keywords: decision-making, cognitive biases, heuristics, management, finance, medicine, law

Citation: Berthet V (2022) The Impact of Cognitive Biases on Professionals’ Decision-Making: A Review of Four Occupational Areas. Front. Psychol. 12:802439. doi: 10.3389/fpsyg.2021.802439

Received: 26 October 2021; Accepted: 03 December 2021;
Published: 04 January 2022.

Edited by:

Sergio Da Silva, Federal University of Santa Catarina, Brazil

Reviewed by:

Silvia Lopes, Universidade de Lisboa, Portugal
Marcia Zindel, University of Brasilia, Brazil

Copyright © 2022 Berthet. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Vincent Berthet, vincent.berthet@univ-lorraine.fr
