
REVIEW article

Front. Psychol., 10 January 2020
Sec. Movement Science

Waste Reduction Strategies: Factors Affecting Talent Wastage and the Efficacy of Talent Selection in Sport

  • School of Kinesiology and Health Science, York University, Toronto, ON, Canada

Coaches are faced with the difficult task of identifying and selecting athletes for their team. Despite its widespread practice in sport, there is still much to learn about improving the identification and selection process. Evidence to date suggests selection decisions (at different competitive levels) can be inaccurate, bias-driven, and sometimes even illogical. These mistakes are believed to contribute to “talent wastage,” the effect of a coach’s wrongful selection and/or de-selection of an athlete to/from a team. Errors of this kind can lead to negative repercussions for all stakeholders involved and therefore deserve further exploration. The purpose of this paper is to shed light on the potential factors influencing talent wastage and to illuminate possible psychological pitfalls when making decisions under uncertainty.

Introduction

In an effort to predict and select the next athletic superstar, substantial resources (e.g., time, money, and energy) are invested with the hope of gaining an edge over the competition. Although there is some evidence to show improvements in the identification and selection of athletes (Tetlock, 2016), research suggests that accuracy rates for predicting athlete potential remain quite low (Abbott and Collins, 2004; Vaeyens et al., 2008; Koz et al., 2012; Schorer et al., 2017; Johnston et al., 2018).

The talent selection process typically takes place early in an athlete’s life and involves administrative personnel (such as a coach, scout, or talent identifier), who are tasked with identifying and predicting future athletic success. Often a series of tests (primarily focused on the physical or physiological attributes of an athlete) combined with coach observations (Christensen, 2009; Schorer et al., 2017) is used to assess performance (Lidor et al., 2009). Following this assessment period, talent selectors make decisions regarding which athletes should be included on (selected) or excluded (de-selected) from the team. To date, there does not appear to be a “gold standard” approach to talent selection; rather, there is a high degree of variability in the techniques, protocols, and processes used for assessment and selection. Current approaches range from subjective preferences and intuition (Williams and Reilly, 2000; Christensen, 2009; Lund and Söderström, 2017) and the use of standardized testing batteries (e.g., 40-meter sprint, vertical jump; Gabbett, 2009; Wells et al., 2009) to hypothesis-free machine-learning approaches (e.g., Güllich et al., 2019). Some researchers describe talent selection approaches as analytical and economically rational (Slack and Parent, 2006), while others have challenged the assumption that talent selection can be a rational or objective process (Cushion and Jones, 2006; Christensen, 2009; Lund and Söderström, 2017), describing it instead as impulsive, irrational, atheoretical, and costly (Bar-Eli et al., 2011).

It has been acknowledged by coaches that selection and de-selection decisions are amongst the most challenging aspects of coaching (Capstick and Trudel, 2010). Not only does a wrongful selection or de-selection decision hurt the program from a performance and resource allocation perspective, but it could also lead to serious repercussions for the athlete. Pinder et al. (2013) called this wrongful inclusion or exclusion “talent wastage” and proposed that a potentially large number of talented performers could be excluded from competitive sport opportunities. Once de-selected, the likelihood of an athlete reaching the elite levels of sport is greatly reduced (Huijgen et al., 2014). Moreover, athletes who have been de-selected from a team have reported feelings of anxiety, humiliation, anger, and a loss of athletic identity, sense of self, and connectedness to school (Grove et al., 2004; Barnett, 2007; Brown and Potrac, 2009; Blakelock et al., 2016; Neely et al., 2016). Over the past few years, there has been increased interest in understanding talent identification and selection with the goal of improving how these processes occur and thereby reducing talent wastage. In this paper, we summarize what is known about the factors affecting the efficacy of talent identification and selection in sport and highlight several areas where this process might be improved.

It is likely much of the “waste” is connected to the poor predictive capabilities of talent identification programs, which may be related to a number of different factors, including: (a) a lack of understanding of what talent is and the way it manifests, (b) cognitive biases affecting human judgment, and (c) situational factors affecting the quality of decisions being made. Given the limited research conducted directly on the influences affecting talent selection in sport, this paper also draws on research from other relevant domains (e.g., psychology, economics, and medicine). The present paper uses a critical review approach to extend beyond a mere description of relevant articles and to act as a “launch pad” for future development in the field, rather than answering a specific research question using a systematic approach (Grant and Booth, 2009). The factors included in this critical review were determined through mapping review exercises to better understand the extent of, and gaps within, the literature on the topic (Evidence for Policy and Practice Information and Co-ordinating Centre, 2006).

Limited Understanding of Talent and How It Evolves Over Time

Arguably, one of the most fundamental issues affecting the accuracy of talent predictions is the limited understanding of the phenomenon itself. In forecasting situations, decisions are made based on the availability of information and the combined assumptions about how that information relates to future performance (Schorer et al., 2017). Although seemingly straightforward, what information is deemed “important” and how that information relates to future talent remain relatively unknown. A recent systematic review of talent identification research from 1990 to 2015 found that only 20 articles (from an original list of 1,696) examined the differences between highly skilled and less-skilled athletes over a period of 1 year or more (Johnston et al., 2018). Results from this review speak to a lack of comparative, longitudinal studies and expose the limited knowledge we have about talent and how it can be effectively measured. Longitudinal studies reduce the likelihood of a biased sample of talented individuals producing so-called “survivor effects,” which arise when researchers examine only those who stay in the system and assume they reflect the qualities needed for success. Even within this limited evidence base, there is large variation in the way talent is defined and likely an even greater degree of variation in the way it is understood and applied in practice. Baker et al. (2018) suggested future research may benefit from improved operationalizations of talent in order to better evaluate the validity of this concept (see also Baker and Wattie, 2018; Bergkamp et al., 2018).

In addition to definition-related issues, our understanding of how talent develops and evolves over time is limited. This is important because sporting organizations in many nations are increasingly tasked with the identification of talent at younger and younger ages (Williams and Reilly, 2000; Abbott and Collins, 2004; Lidor et al., 2009; Mann et al., 2017). Early identification processes have been reported to begin as early as 6 years of age (Baker and Wattie, 2018). Despite the prevalence of early selection practices, and how deeply rooted they are in athlete development programs, reliable and valid early indicators of adult performance have yet to be found (Ericsson and Charness, 1994, 1995; Ericsson, 1998; Nash and Collins, 2006; Wattie and Baker, 2017; Baker and Wattie, 2018).

This is likely related to the unsupported assumption that talent follows a predictable trajectory. Goodman (1946) called this assumption the “riddle of induction,” whereby evidence from the past leads to a rule intended to predict the future. The challenge of projecting from the past is that it creates a linear and causal model in a judge’s mind, which may lead to problematic and restrictive ways of thinking (Taleb, 2007). These causal relationships (between variable(s) x and talent outcome y) are difficult to justify unless variable x precedes y temporally, is reliably correlated with y, and that correlation exceeds what would be expected by random chance (Kennedy, 1979). In reality, the evidence for talent development is not strong enough to support causal claims. As Simonton (1999) and Howe et al. (1998) noted, being talented at a young age does not necessarily lead to being talented later in life, or vice versa. Additionally, many of the qualities that distinguish top athletic performance in adults may only emerge later in development (Bloom, 1985; French and McPherson, 1999; Simonton, 1999; Morris, 2000). Moreover, some believe talent emerges out of dynamic networks with multiple components and multiple interactions, speaking to the “unpredictable” nature of talent (Simonton, 1999; Phillips et al., 2010; Den Hartigh et al., 2016). The literature on the accuracy of predictions in other disciplines, such as meteorology and stock trading, has highlighted an inverse relationship between the length of the forecasting horizon and the accuracy of predictions (Swets, 1988; Silver, 2012). Poincaré (1913) argued that projections into the future require increasing amounts of knowledge and precision about the process under examination, as the rate of possible error grows rapidly. Unfortunately, the degree of precision necessary for effective predictions of talent in sport does not match the current degree of knowledge.
For example, Silver (2012) noted that performance statistics taken at the high school or college level in baseball hold barely any predictive power for future performance in the minor (AA and AAA) and major leagues. There is an added challenge for forecasters, and specifically talent selectors, in making predictions using variables that are in a state of change (Pearson et al., 2006; Elferink-Gemser et al., 2007; Vaeyens et al., 2008). For example, a female athlete between the ages of 11 and 14 is thought to be at her “peak height velocity,” a time characterized by rapid changes in height and weight (Philippaerts et al., 2006). Depending on when she is assessed, her attributes and capabilities may fluctuate in ways that help or hinder her chances of selection to a team (Vaeyens et al., 2008). Although a number of tests demonstrate statistically significant associations with future sport success, such tests are questionable in their ability to accurately predict talent in sport (Bahr, 2016), especially given the unstable and dynamic nature of talent, which poses a potentially infinite number of interactions to consider (Den Hartigh et al., 2016, 2018a). The combined effects of a limited understanding of talent and of how it changes over time have implications for effective talent selection. It is also likely this limited understanding is amplified by the many cognitive limitations that arise during decision-making.

Cognitive Biases, Illusions and Perceptions Affecting Judgments About Talent

Human decision-making is beset by psychological pitfalls, something that has become more widely recognized in the last few decades. In addition to being prone to biases, humans (and therefore their decisions) have been shown to be highly influenced by emotion, fatigue, hunger, and mood (Johnson and Tversky, 1983; Slovic et al., 2002; Danziger et al., 2011; Västfjäll et al., 2014). Collectively, these studies speak to the reality of the “human effect” and demonstrate the difficulty of remaining unbiased, even in professions that demand objectivity.

Put succinctly, wherever there is a requirement for human decision-making, there is potential for error. Each decision maker will have his/her own preferences and values, but some common habits exist when forming judgments: (a) there is a tendency to rely on a relatively small number of cues (n = 3–5), (b) many judgments follow a linear and predictable way of thinking, and (c) there is a low degree of inter-judge agreement (Hastie and Dawes, 2001). This is not to paint all judges with the same negative brush, but rather to acknowledge that even “expert” judges adopt similar ways of thinking. Some of these linear and predictable ways of thinking include tendencies to (a) forget specifics and remember generalities, (b) store memories differently depending on the way they were experienced, (c) be drawn to details that confirm personal beliefs, (d) find stories and patterns in sparse data, (e) fill in characteristics to fit stereotypes and prior histories, and (f) project current mindsets into the past and future (Benson, 2016). In the decision-making process, from formulating a judgment to executing a decision, there are many opportunities for cognitive shortcuts. Some of these shortcuts (i.e., heuristics) can be helpful in the decision-making process, while others can be harmful (Simon et al., 2017). To highlight how these biases likely affect judgments regarding talent, we describe (a) personal preferences and intuition, (b) framing and the endowment effect, (c) the illusion of confidence, and (d) the primacy effect.

Personal Preferences, Beliefs, and Intuition

Perhaps the greatest influences affecting the selection of athletes are the preferences, beliefs, and/or goals of the talent selector (Christensen, 2009; Jokuschies et al., 2017). A talent selector’s lived experiences, along with the education and environment he/she was exposed to (known as tacit knowledge), are likely to influence the types of athletes selected (Cushion and Jones, 2006; Christensen, 2009; Lund and Söderström, 2017). However, few researchers have attempted to study how talent selectors develop, access, and utilize knowledge at appropriate times and how that knowledge plays a role in their decision-making (for exceptions, see Cushion and Jones, 2006; Vrljic and Mallett, 2008; Christensen, 2009; Mills et al., 2012; Lund and Söderström, 2017). Simon (1955) observed that decision makers identify a relatively small number of cues to form simplified models for evaluating complex problems. These models are believed to reflect the decision-makers’ personal preferences, beliefs, or goals (Lund and Söderström, 2017). In a sport-related example, Bucci et al. (2012) found that coaches selected their “best” athletes based on their similarity to the coaching staff and their alignment with the staff’s ideologies/values. This type of approach speaks to the importance of personal values and their role in influencing decisions.

Under conditions of uncertainty, talent selectors often have incomplete information (e.g., they may not know a player very well, may not know much about his/her past performances, and may be uncertain about how the athlete will perform at a higher level). As a result, decisions may be influenced by a decision-maker’s intuition (Nash and Collins, 2006; Plessner et al., 2011). These automatic-thinking processes can be time-efficient strategies but can also lead to systematically flawed decision-making outcomes. Nash and Collins (2006) argued that as expertise grows, the decision-making process becomes less well-defined in a talent selector’s mind. Similarly, Davids and Myers (1990) believe that with increased expertise comes a greater reliance on intuitive feelings. It is also important to consider that talent selectors may think they are using intuition when, in reality, they have a well-defined approach to selection but difficulty articulating it (Nash and Collins, 2006).

In a study exploring the sources of information coaches use to assess talent, Christensen (2009) found coaches tend to use their visual experience to recognize patterns that help identify talent, referred to as the “coaches’ eye.” What is not well known is whether this coaches’ eye differs from intuition and whether it helps increase the accuracy of talent predictions. It is possible the coaches’ eye is a superior selection approach, as expert coaches have extensive domain-specific knowledge (Côté et al., 1995; Nash and Collins, 2006) and are believed to think in different ways than non-experts (Chase and Simon, 1973; North et al., 2011). For example, it has been recognized that skilled and less-skilled individuals search for information and perceive their environment in very different ways (Raab and Johnson, 2007; McRobert et al., 2009). Ericsson and Kintsch (1995) proposed that skilled individuals have complex task-specific encoding skills and memory retrieval structures that differ from those of less-skilled individuals. However, further research is required to better understand what the “coaches’ eye” entails and its relative strengths and weaknesses in talent selection decisions (Williams and Reilly, 2000; Andersson et al., 2005; Vaeyens et al., 2008; Jokuschies et al., 2017).

Framing and the Endowment Effect

Many professional sports use a “draft” to select newly eligible athletes. Because of the considerable cost of athlete salaries in many professional sports, draft selections carry considerable economic risk, as demonstrated by the discrepancy in salaries across draft rounds. It has been reported that the first overall pick can sign an initial contract worth up to four times the amount of the last pick in the same (i.e., first) round (Massey and Thaler, 2010). In their classic studies on the psychology of decision-making, Tversky and Kahneman (1981) demonstrated that the outcome of a decision depends on how the scenario is framed. For example, a question framed in terms of losses often leads to “risk-averse” decisions, whereas a question posed in terms of gains often leads to more “risk-seeking” decisions (Tversky and Kahneman, 1981). Given the generality of this cognitive bias, it is possible a talent selector who is told to acquire an athlete (a frame of gains) may become “risk seeking,” which could lead to overvaluing the desired player. On the other hand, if the talent selector is asked to trade an athlete on the team (a frame of losses), he/she may become more “risk averse.” This relates to a recognized phenomenon called the “endowment effect,” the tendency to overvalue things already owned and to undervalue things that are not owned (Kahneman et al., 1990, 1991).

Both “risk aversion” and the “endowment effect” are closely related to the “sunk-cost bias,” whereby the investment of time, energy, or money in something leads to the feeling that one must get a worthy return on that investment (e.g., feeling obligated to drive to the symphony in a horrible snowstorm only because a ticket has already been purchased). Often a sunk cost is accepted in an effort to avoid social or personal disapproval. In sport, a talent selector may turn down a trade that he/she might otherwise have made because of the “endowment effect” or the “sunk-cost bias,” which subsequently may affect the accuracy of talent selections (for examples, see Staw and Ross, 1989; Staw and Hoang, 1995; Lewis, 2016). Similarly, substantial monetary investments (mostly through signing bonuses) have been shown to lead NFL coaches to provide more playing opportunities to players drafted in higher rounds, even though these players do not outperform their counterparts selected in later rounds of the draft (Keefer, 2017).

The Illusion of Confidence

A relationship has been observed between perceived level of confidence and the accuracy of predictions. In many domains, confidence exceeds accuracy (Lichtenstein et al., 1982; Keren, 1991; McClelland and Bolger, 1994); examples include physicians’ predictions of pneumonia (Christensen-Szalanski and Bushyhead, 1981), economists’ quarterly forecasts of recession (Braun and Yaniv, 1992), and chess players’ predictions of their opponents’ moves (Griffin and Tversky, 1992). This overconfidence is believed to lead to relatively systematic errors in predictions (Kahneman and Tversky, 1973; Alpert and Raiffa, 1982), as it has been suggested that those with increased levels of confidence are prone to greater levels of dispositional biases and/or illusions in order to avoid social disapproval (Tsay and Banaji, 2011). For instance, the “confirmation bias” (Nickerson, 1998) is common among overly confident judges, who tend to search for, focus on, and remember information in a way that corroborates their hypotheses. In another example, overly confident forecasters fell victim to “retrospective distortion” more frequently than their less-confident counterparts. Known as the “hindsight bias,” or the “knew-it-all-along” effect, retrospective distortion is characterized by the tendency to see past events as more predictable than they really were (Fischhoff and Beyth, 1975; Hertwig et al., 2003).

Psychologists Robyn Dawes, Paul Meehl, and Phil Tetlock are known as the “expert-busting” researchers (Lewis, 2016). Their studies have exposed an “expert problem” whereby those with a “bigger” reputation are often worse predictors than those who hold a less notable reputation in certain fields (Camerer and Johnson, 1997; Tetlock, 2005, 2016; Taleb, 2007). In his book, Meehl (1954) reviewed 20 studies showing that well-informed experts who predicted outcomes were not as accurate as a simple algorithm that added up objective data. Tetlock (2005) surveyed political pundits who were asked to make predictions for multiple major events in the 1980s and 1990s. Findings revealed the experts were only slightly more successful than random chance and worse than a basic statistical model of prediction, yet they displayed high levels of overconfidence. Schorer et al. (2017) compared regional and national coaches in predicting the future performance of handball athletes and found little difference between the two levels of coaches. Last, newspaper tipsters were found to be no more successful in predicting soccer matches than the simple strategy of assuming home wins (Forrest and Simmons, 2000). It is important to acknowledge that not all of these examples directly relate to coaching expertise and decision-making accuracy; however, they do raise important questions about the “expert effect” and how it might influence the accuracy of predictions for talent. These examples are not intended to suggest that all experts are poor decision makers (for counter-examples, see Tetlock, 2016), but rather to highlight the importance of exploring confidence as a potential factor in, or proxy for, illogical or error-filled decision-making processes for talent selection in sport.

Time to Make a Prediction and the Primacy Effect

In most cases, talent selectors have limited time to gather information about an athlete and about whether he/she should be accepted to the team. It can be common for a talent selector to have only two or three interactions with an athlete before a judgment and subsequent decision are made. In junior ice hockey (e.g., house league), coaches draft players based on tryouts held over the span of a few days (Tromp et al., 2013). In the Netherlands, talent identification and development programs for soccer at the youth and adolescent levels begin after the first day of training, and subsequent de-selections are made on a daily basis thereafter (Huijgen et al., 2014). With such a constrained amount of time, a coach’s ability to make informed assumptions about an athlete’s potential is compromised. This is especially true if the athlete is not performing at his/her “best” during the assessment period (e.g., because of injury or personal circumstances).

Additionally, Nickerson (1998) observed that a decision maker’s thoughts are often dominated by his/her initial impressions, known as the “primacy effect.” This effect may be particularly consequential in talent selection because a talent selector’s first impression may be the only impression that he/she remembers from a try-out or talent identification camp. If an athlete underperforms (compared to his/her standard), that athlete may need to work even harder to impress the talent selector and overcome the primacy effect (Silver, 2012).

Situational Factors

In addition to the previously mentioned influences, there are situational factors that affect a talent selector’s accuracy. These factors include: (a) the use of standardized testing batteries, (b) the incorporation of machine-based approaches, (c) politics or policy-related issues, (d) the number and personality of people in the decision-making process, and (e) the limited opportunities for feedback to update decisions.

Standardized Testing Batteries

To date, much of the research on talent identification has focused on the types of testing batteries used in talent identification programs (Lidor et al., 2005; Breitbach et al., 2014). Despite the focus on testing, there is little agreement on which tests reliably predict talent; moreover, very little is known about how test results influence the decision-making process. The type of testing battery, the execution and measurement of the tests, and the way the results are used may affect the accuracy of talent selection. Some of the most commonly used methods include physical and anthropometric testing (Gil et al., 2014), technical skill measurements (Williams and Reilly, 2000; Vaeyens et al., 2006; Waldron and Worsfold, 2010; Höner et al., 2017), assessment of tactical (Kannekens et al., 2011) and perceptual cognitive capabilities (Ward and Williams, 2003; Roca et al., 2012; Causer and Ford, 2014), as well as evaluation of psychological factors (Toering et al., 2009). In most studies, measurements have been “unidimensional” in nature with a focus on one area of performance (e.g., solely the physiology of the attribute). Within those unidimensional studies, there is little agreement on whether those factors reliably predict successful performance (Lidor and Lavyan, 2002; Lidor et al., 2005; Johnston et al., 2018). Moreover, the appropriate weight to give to an athlete’s scores on different tests is largely unknown. For example, if an athlete tests poorly on an agility drill, but outperforms her teammates in a scrimmage, how do these scores affect the coach’s evaluation of that athlete relative to selection? In essence, these issues relate to coaches’ “sensitivity” and “specificity” when classifying athletes (Parikh et al., 2008). If a coach has a high level of sensitivity, he/she has an increased likelihood of correctly selecting athletes who meet or exceed expectations. 
Similarly, a coach who has a high degree of specificity has an increased accuracy in de-selecting athletes who would have been true under-performers. However, the true levels of sensitivity and specificity are difficult to determine, because there is little way of knowing whether the “correct” decision has been made (i.e., it is nearly impossible to determine whether the right athletes were selected or de-selected). This speaks to the importance of a coach knowing his/her comfort level with making a Type I or Type II error in the process. Until tests for identification and selection are sensitive enough to reflect the physical, psychological, and cognitive aspects of sport, at both elite and lower levels of competition, caution should be taken to avoid an over-reliance on testing measures to categorize individuals as “talented” or “untalented.”
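The sensitivity/specificity logic above can be made concrete with a small numerical sketch. All outcome counts below are hypothetical and invented for illustration; as noted, such counts are unknowable in practice because de-selected athletes rarely get the chance to show whether they would have succeeded.

```python
def selection_accuracy(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity and specificity of a coach's selection decisions.

    true_pos:  selected athletes who went on to meet expectations
    false_neg: de-selected athletes who would have met expectations (Type II error)
    true_neg:  de-selected athletes who would have under-performed
    false_pos: selected athletes who under-performed (Type I error)
    """
    sensitivity = true_pos / (true_pos + false_neg)  # share of "true talent" correctly selected
    specificity = true_neg / (true_neg + false_pos)  # share of under-performers correctly de-selected
    return sensitivity, specificity

# Hypothetical tryout of 30 athletes: 15 selected, 15 de-selected
sens, spec = selection_accuracy(true_pos=10, false_neg=5, true_neg=10, false_pos=5)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

The sketch only formalizes the trade-off a coach implicitly manages: raising sensitivity (selecting more liberally to avoid missing talent) tends to lower specificity (more under-performers are selected), and vice versa.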

Pinder et al. (2013) proposed that a driving factor behind this low degree of reliability is the inappropriate measurement of talent. Many talent identification programs are accused of adopting testing batteries that do not accurately represent the demands of the sport (Pinder et al., 2013). This is often combined with reliance on a relatively small number of heavily weighted variables measured in isolation from the sport context (Abbott et al., 2005). It is also likely there is variability in the extent to which the same component contributes to successful performance across different sport domains, levels of competition, ages of athletes, or even different playing positions within the same sport (Bergkamp et al., 2018). These non-representative, highly variable, and reductionist approaches have been recognized as limiting the ability to accurately test for and identify talented athletes (for recent reviews on the fidelity of testing batteries, see Bergkamp et al., 2019 and Den Hartigh et al., 2018b). Researchers have called for more ecologically valid and representative designs that mirror the position-specific demands of the sport to adequately assess athletic performance (Pinder et al., 2013; Den Hartigh et al., 2018b). By rigorously studying an athlete’s development over a substantial period of time (more than one season) through a multidimensional lens (physiology, perceptual-cognitive ability, psychology, motor task ability, etc.), there is a greater likelihood of understanding the capabilities and limitations of measuring talent.

Machine-Based Approaches

One of the ways researchers and practitioners have tried to minimize the variability due to human error and bias is by incorporating computer-based modeling. This can be done in two ways. First, many talent selectors at the professional level are turning to a blended approach to athlete selection, combining human judgment with “artificial intelligence.” In many professional sport leagues, the current debate is not whether statistics should be used in the decision-making process, but rather which statistics are best (Lewis, 2003, 2016). However, while this technique is starting to trickle down to lower levels of sport participation, little is known to date about the efficacy of predictive modeling for selection at younger ages.

A second approach uses the computational power of modern technology to recognize more complex patterns of variable interaction. For instance, Güllich et al. (2019) used a machine-learning approach to identify patterns in key factors that distinguished super-elite from elite athletes in the United Kingdom. Conceptually, this approach considers possible patterns and interactions amongst a vastly larger number of variables than can be considered in traditional analyses. This approach, among others (e.g., Maymin, 2017), allows researchers to test more complex and dynamic models without the statistical power requirements of approaches such as analysis of variance or multiple regression. What is yet to be determined is whether collecting and analyzing a greater number of variables can in fact lead to better predictions of sporting talent.

With growing reliance on technology, more research is illuminating the relative advantages and disadvantages of using computers to help in forecasting situations. For instance, more rapid and reliable decisions are not necessarily better decisions. Poor initial input will compromise the accuracy of predictions (i.e., the “garbage in, garbage out” problem). Additionally, computers rely on sound and accurate models to form the basis of the analysis, and many (e.g., Abbott et al., 2005; Baker et al., 2018) have argued that current models of sporting talent are too simplistic. Interestingly, however, with the appropriate information, simple computer models have been shown to be very good at making predictions (Bejnordi et al., 2017). Even when people claim their mental models are more complex than a simple linear equation, an overwhelming amount of empirical research suggests that a basic equation does a surprisingly good job of capturing their judgment habits and, in most cases, outperforms their predictions (Meehl, 1954; Sawyer, 1966; Goldberg, 1968; Einhorn, 1972; Libby, 1976; Cooksey, 1996; Grove and Meehl, 1996; Grove et al., 2000; Den Hartigh et al., 2018b). These studies also showed that experts correctly selected the variables that were important in making predictions, but, surprisingly, a linear model combining those variables and their associated weights outperformed the global judgments of the same experts. It will be important to learn how computer systems can help evaluators make talent selection decisions and, more specifically, how they may help overcome the cognitive constraints explored above.
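The “basic equation” described in this literature can be sketched in a few lines: a simple linear model multiplies each expert-chosen cue by a fixed weight and sums the result. The cue names, weights, and athlete scores below are illustrative assumptions, not values from any of the cited studies.

```python
# A minimal sketch of a simple linear judgment model (in the spirit of the
# Meehl/Dawes findings): fixed weights applied to a handful of cues.
def linear_prediction(athlete, weights):
    """Weighted sum of an athlete's cue scores (cues given as z-scores)."""
    return sum(weights[cue] * athlete[cue] for cue in weights)

# Hypothetical cue weights a selection panel might settle on
weights = {"sprint_z": 0.4, "skill_z": 0.4, "coachability_z": 0.2}

# Hypothetical standardized tryout scores for two candidates
candidates = {
    "A": {"sprint_z": 1.2, "skill_z": 0.3, "coachability_z": -0.5},
    "B": {"sprint_z": 0.1, "skill_z": 1.0, "coachability_z": 1.1},
}

scores = {name: linear_prediction(ath, weights) for name, ath in candidates.items()}
best = max(scores, key=scores.get)  # candidate with the highest weighted score
```

The point of the cited research is not that any particular weights are correct, but that even a crude, consistently applied rule of this form tends to outperform the holistic judgments of the very experts who chose the cues.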

Political and Policy-Related Issues

The accuracy of talent selections may also be related to the politics at play. There may be many reasons a talented athlete is not selected to a team. For example, some teams must include a certain number of domestic and international players and may be forced to make decisions to reach certain quotas (Aarons, 2018). In another example, a coach or other staff member with a son/daughter in the sport may directly or indirectly influence the selection of his/her child at the expense of an athlete more “suitable” to the team.

There is also a natural tendency to listen to others who are in positions of power, who exude confidence, and who have overbearing personalities (Surowiecki, 2004). For instance, people trust more “confident” financial advisers over less “confident” ones, even when their track records are identical (Tetlock, 2016). It is possible that a talent selector will follow the advice or encouragement of a colleague or a parent because of perceived “authority” or perceived confidence. Knight and Harwood (2009) noted that youth sport coaches were concerned about parents’ reactions and reported parents being a stressor in the selection process. It appears that coaches strive to make selection decisions that are seen as fair, which could lead to a decision being made out of a desire to appease others (i.e., parents, staff, and friends).

Number of People Involved in Decision-Making

The accuracy of predictions is thought to be influenced by the number of people involved in the decision-making process. There is strong empirical and theoretical evidence demonstrating a benefit from aggregating different forecasts (Surowiecki, 2004; Silver, 2012; Budescu and Chen, 2014; Martire et al., 2018). Across a number of different disciplines, from medicine to political polling, averaging forecasts (rather than relying solely on one forecast) has been found to reduce error (Surowiecki, 2004; Yaniv, 2004; Hastie and Kameda, 2005; Silver, 2012). The exact number of forecasters needed to improve the accuracy of a prediction is still debated, but there appears to be a “goldilocks zone” between having too few and too many forecasters. Multiple advantages have been highlighted from applying the principles of “the wisdom of the crowd” and aggregating forecasts, such as (a) maximizing the amount of information available to craft a judgment, (b) reducing the potential impact of an extreme source of information that may be unreliable (Ariely et al., 2000; Johnson et al., 2001), and (c) increasing the credibility and validity of the aggregation process (Wallsten and Diederich, 2001).

If individuals who evaluate sporting talent behave like forecasters in other prediction domains, then including additional personnel with different perspectives in the decision-making process may positively influence the accuracy of talent selections. This is likely dependent upon resources, program structure, and situational constraints. For example, some programs may have only one coach (sometimes a parent) tasked with selection decisions, whereas other programs (e.g., in the Netherlands) have been reported to include trainers, coaches, and technical staff in the selection process for adolescent soccer players (Huijgen et al., 2014). With an increased number of judges in the selection process, there is a greater likelihood of making a more rational and less biased prediction (Surowiecki, 2004). This statistical phenomenon, known as “the wisdom of crowds,” is rooted in the mathematical aggregation of individual estimates (Lorenz et al., 2011; Surowiecki, 2004). Under the right circumstances, the wisdom of crowds effect can lead to surprisingly close estimations and predictions in domains as varied as stock markets, political elections, and quiz shows (Surowiecki, 2004). Caution should be taken, however, as more people in the decision-making process does not always result in better decisions. For example, it has been demonstrated that even mild social influence can negatively affect the wisdom of crowds effect in simple estimation tasks (Lorenz et al., 2011).
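Both the aggregation benefit and its erosion under social influence can be sketched in a few lines (a toy simulation with hypothetical numbers, not sport data): averaging independent judges cancels their individual errors, but a bias shared across the panel cannot be averaged away.

```python
import numpy as np

rng = np.random.default_rng(7)
true_score, n_trials, n_judges = 75.0, 5000, 9

# Independent judges: each estimate = truth + that judge's own error.
independent = true_score + rng.normal(scale=10.0, size=(n_trials, n_judges))

# Socially influenced judges: part of each error is shared across the panel
# (per-judge error kept at the same overall scale as above).
shared_bias = rng.normal(scale=6.0, size=(n_trials, 1))
influenced = true_score + shared_bias + rng.normal(scale=8.0, size=(n_trials, n_judges))

err_single = np.abs(independent[:, 0] - true_score).mean()
err_crowd = np.abs(independent.mean(axis=1) - true_score).mean()
err_influenced_crowd = np.abs(influenced.mean(axis=1) - true_score).mean()

print(f"single judge:             mean error = {err_single:.2f}")
print(f"crowd of 9 (independent): mean error = {err_crowd:.2f}")
print(f"crowd of 9 (shared bias): mean error = {err_influenced_crowd:.2f}")
```

Averaging the independent panel shrinks the expected error roughly in proportion to the square root of the panel size; the shared-bias panel still beats a lone judge here, but much of the benefit disappears — the Lorenz et al. (2011) effect in miniature.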

Feedback Opportunities

The nature of talent identification programs limits a talent selector’s ability to observe his/her accuracy in making predictions. For a prediction to be considered “correct,” a mechanism for feedback must be available to the decision maker. Many coaches (especially at lower levels of competition) may have only one season with an athlete and therefore limited knowledge of whether that athlete continued in competitive sport. It is possible that the scarcity of feedback opportunities in such a long developmental pathway is a limiting factor affecting accuracy rates. For instance, Tetlock (2016) noted police officers were not nearly as good as they thought they were at distinguishing guilty suspects from innocent ones, despite spending substantial amounts of time on such tasks as part of their duties. This is thought to relate to the fact that it often takes months or years for charges to be laid, trials to be run, and verdicts to be reached. Even when there is a resolution, many factors may have influenced the outcome, and during that process officers seldom receive clear feedback about whether their judgment was accurate (Tetlock, 2016). Conversely, meteorologists receive nearly instant feedback, and their accuracy rates continue to improve. Future predictions could benefit from further research examining the possibilities for talent selector feedback.

Can Better Forecasts Be Made?

It may be true that talent in sport cannot be studied with the rigor of chemistry or geology, but that does not mean a reliance on intuition and a “coach’s eye” should be encouraged. Drawing inferences from other disciplines will only lead us so far, which is why it will be important for future research to study decision-making in the specific and varied contexts of sport. Part of the solution could be to place a greater emphasis on studying the process of decision-making for talent selection rather than the outcome of the decision itself (i.e., what are the sources of information talent selectors use when shaping their beliefs about players’ skill levels?). This includes encouraging talent selectors to explicitly state their rules for decision-making and to attach weights to the judgment inputs used in their mental modeling (Musculus and Lobinger, 2018). Additionally, evidence from decision-making research encourages judges to express and quantify uncertainty in predictions by reporting a margin of error (Hastie and Dawes, 2001). This approach encourages judges to gather evidence in a meaningful way and provides a method to calibrate outcomes for feedback purposes (e.g., out of all the times you said there was a 40% chance, how often did that actually occur?). Combined with the recognition that our assumptions, biases, and illusions distort how we interpret the signals we receive (Taleb, 2007; Silver, 2012), this approach to talent selection may help selectors better understand their own processes, give context and meaning to their approaches, and provide a method for checking accuracy.
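The calibration check described above can be made concrete with a short helper (an illustrative sketch; the function name and data are hypothetical): group a selector’s probabilistic predictions by stated probability and compare each group’s stated value against the observed outcome rate.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Map each stated probability (rounded to one decimal) to the observed
    frequency of the predicted event among those forecasts.

    forecasts: iterable of (stated_probability, outcome) pairs, outcome in {0, 1}.
    """
    bins = defaultdict(list)
    for prob, outcome in forecasts:
        bins[round(prob, 1)].append(outcome)
    return {prob: sum(hits) / len(hits) for prob, hits in sorted(bins.items())}

# Ten hypothetical "will this athlete still compete at this level in 5 years?" calls.
records = [(0.4, 1), (0.4, 0), (0.4, 0), (0.4, 0), (0.4, 1),
           (0.8, 1), (0.8, 1), (0.8, 0), (0.8, 1), (0.8, 1)]
print(calibration_table(records))  # {0.4: 0.4, 0.8: 0.8} -> well calibrated
```

A selector whose 40% calls come true about 40% of the time is well calibrated; large gaps between stated probability and observed frequency indicate over- or underconfidence and give the selector concrete feedback to act on.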

As we enter the age of Big Data, with information and processing power increasing at startling rates, it is important to consider how we can incorporate computer-based modeling in a responsible way. It will be important to find a balance that combines the best of “artificial intelligence” and human capabilities to create models that are detailed enough to be helpful while still accurately representing the phenomenon (Silver, 2012; Den Hartigh et al., 2018b). As noted earlier, it is still unclear whether collecting more variables will in fact lead to better prediction. This highlights the importance of recognizing that if you cannot make a good prediction, it can sometimes be harmful to pretend you can, especially when this involves providing feedback about potential to young athletes and/or removing them from the athlete development system.

Conclusion

To date, the available evidence on the accuracy of decisions made by talent selectors is not compelling (Schorer et al., 2017). In this review, we have summarized a range of potential factors that may explain, at least partially, these low accuracy rates. It is important to note, however, that much of this research has been conducted outside of sport; future work with evaluators in sport settings would help inform our understanding of how judgments are formed, how decisions are made, and the influences affecting them, which, in turn, has the potential to improve future predictions. With more effective decision-making procedures, it may be possible to minimize talent wastage and reduce the risk of wrongfully de-selecting an athlete from the sport participation pathway.

Author Contributions

KJ and JB substantially contributed to the review. Both authors drafted the manuscript and revised it critically, gave their final approval of the current manuscript version to be published, and agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Funding

This work was supported in part by funds from the Social Sciences and Humanities Research Council of Canada (grant #862 – 2014 – 0001).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to thank Lou Farah and Nima Dehghansai for their feedback and insight on the topic.

Footnotes

  1. ^ The term “talent selector” will be used throughout the paper to capture all three classifications.
  2. ^ In 2003 the Boston Celtics of the National Basketball Association selected Brandon Hunter based on the output of a computer algorithm. He became the first player to be picked by an “equation” (Lewis, 2016).
  3. ^ The term “talent wastage” refers to wastage of “potential” throughout the athlete development system leading to system inefficiencies.
  4. ^ For the purposes of this paper, the term “judge” will be used to represent a person crafting a judgment for a decision to be made.
  5. ^ Some studies use synonyms such as “excellence” in reference to “talent.”
  6. ^ Simon’s “administrative” model of decision-making was applied to the economics domain but may hold relevance for understanding how coaches make decisions during selection processes (Neely et al., 2016).
  7. ^ The examples provided include participants who are in a temporarily constrained visual task, which challenges the transferability of findings to other domains.

References

Aarons, E. (2018). Premier League Rejects FA Proposal to Increase Homegrown Quotas. Available at: https://www.theguardian.com/football/2018/nov/21/premier-league-rejects-fa-proposal-homegrown-quotas (accessed November 21, 2018).

Abbott, A., Button, C., Pepping, G. J., and Collins, D. (2005). Unnatural selection: talent identification and development in sport. Nonlinear Dyn. Psychol. Life Sci. 9, 61–88.

Abbott, A., and Collins, D. (2004). Eliminating the dichotomy between theory and practice in talent identification and development: considering the role of psychology. J. Sports Sci. 22, 395–408. doi: 10.1080/02640410410001675324

Alpert, M., and Raiffa, H. (1982). “A progress report on the training of probability assessors,” in Judgment Under Uncertainty: Heuristics and Biases, eds D. Kahneman, P. Slovic, and A. Tversky (New York, NY: Cambridge University Press), 294–305. doi: 10.1017/cbo9780511809477.022

Andersson, P., Edman, J., and Ekman, M. (2005). Predicting the World Cup 2002 in soccer: performance and confidence of experts and non-experts. Int. J. Forecast. 21, 565–576. doi: 10.1016/j.ijforecast.2005.03.004

Ariely, D., Au, W. T., Bender, R. H., Budescu, D. V., Dietz, C. B., Gu, H., et al. (2000). The effects of averaging subjective probability estimates between and within judges. J. Exp. Psychol. Appl. 6, 130–147. doi: 10.1037/1076-898x.6.2.130

Bahr, R. (2016). Why screening tests to predict injury do not work—and probably never will…: a critical review. Br. J. Sports Med. 50, 776–780. doi: 10.1136/bjsports-2016-096256

Baker, J., and Wattie, N. (2018). Innate talent in sport: separating myth from reality. Curr. Issues Sport Sci. 3:6. doi: 10.15203/CISS_2018.006

Baker, J., Wattie, N., and Schorer, J. (2018). A proposed conceptualization of talent in sport: the first step in a long and winding road. Psychol. Sport Exerc. 43, 27–33. doi: 10.1016/j.psychsport.2018.12.016

Bar-Eli, M., Plessner, H., and Raab, M. (2011). Judgment, Decision-Making and Success in Sport. Oxford, UK: Wiley Blackwell.

Barnett, L. (2007). “Winners” and “losers”: the effects of being allowed or denied entry into competitive extracurricular activities. J. Leisure Res. 39, 316–344. doi: 10.1080/00222216.2007.11950110

Bejnordi, B. E., Veta, M., Van Diest, P. J., Van Ginneken, B., Karssemeijer, N., Litjens, G., et al. (2017). Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 318, 2199–2210. doi: 10.1001/jama.2017.14585

Benson, B. (2016). Cognitive Bias Cheat Sheet. Better Humans. Available at: https://betterhumans.coach.me/cognitive-bias-cheat-sheet-55a472476b18 (accessed Dec 1, 2018).

Bergkamp, T. L., Niessen, A. S. M., Den Hartigh, R. J., Frencken, W. G., and Meijer, R. R. (2018). Comment on: “Talent identification in sport: a systematic review”. Sports Med. 48, 1517–1519. doi: 10.1007/s40279-018-0868-6

Bergkamp, T. L., Niessen, A. S. M., Den Hartigh, R. J., Frencken, W. G., and Meijer, R. R. (2019). Methodological issues in soccer talent identification research. Sports Med. 49, 1317–1335. doi: 10.1007/s40279-019-01113-w

Blakelock, D. J., Chen, M. A., and Prescott, T. (2016). Psychological distress in elite adolescent soccer players following deselection. J. Clin. Sport Psychol. 10, 59–77. doi: 10.1123/jcsp.2015-0010

Bloom, B. S. (1985). Developing Talent in Young People. New York, NY: Ballantine Books.

Braun, P. A., and Yaniv, I. (1992). A case study of expert judgment: economists’ probabilities versus base-rate model forecasts. J. Behav. Decis. Mak. 5, 217–231. doi: 10.1002/bdm.3960050306

Breitbach, S., Tug, S., and Simon, P. (2014). Conventional and genetic talent identification in sports: will recent developments trace talent? Sports Med. 44, 1489–1503. doi: 10.1007/s40279-014-0221-7

Brown, G., and Potrac, P. (2009). ‘You’ve not made the grade, son’: de-selection and identity disruption in elite level youth football. Soccer Soc. 10, 143–159. doi: 10.1080/14660970802601613

Bucci, J., Bloom, G. A., Loughead, T. M., and Caron, J. G. (2012). Ice hockey coaches’ perceptions of athlete leadership. J. Appl. Sport Psychol. 24, 243–259. doi: 10.1080/10413200.2011.636416

Budescu, D. V., and Chen, E. (2014). Identifying expertise to extract the wisdom of crowds. Manag. Sci. 61, 267–280. doi: 10.1287/mnsc.2014.1909

Camerer, C. F., and Johnson, E. J. (1997). “The process-performance paradox in expert judgment: how can experts know so much and predict so badly?,” in Toward a General Theory of Expertise: Prospects and Limits, eds K. A. Ericsson, and J. Smith (Cambridge: Cambridge University Press).

Capstick, A. L., and Trudel, P. (2010). Coach communication of non-selection in youth competitive sport. Int. J. Coach. Sci. 4, 3–23.

Causer, J., and Ford, P. R. (2014). “Decisions, decisions, decisions”: transfer and specificity of decision-making skill between sports. Cogn. Process. 15, 385–389. doi: 10.1007/s10339-014-0598-0

Chase, W. G., and Simon, H. A. (1973). Perception in chess. Cognit. Psychol. 4, 55–81. doi: 10.1016/0010-0285(73)90004-2

Christensen, M. K. (2009). “An eye for talent”: talent identification and the “practical sense” of top-level soccer coaches. Sociol. Sport J. 26, 365–382. doi: 10.1123/ssj.26.3.365

Christensen-Szalanski, J. J., and Bushyhead, J. B. (1981). Physicians’ use of probabilistic information in a real clinical setting. J. Exp. Psychol. Hum. Percept. Perform. 7, 928–935. doi: 10.1037//0096-1523.7.4.928

Cooksey, R. W. (1996). Judgment Analysis: Theory, Methods, and Applications. San Diego, CA: Academic Press.

Côté, J., Salmela, J., Trudel, P., Baria, A., and Russell, S. (1995). The coaching model: a grounded assessment of expert gymnastic coaches’ knowledge. J. Sport Exerc. Psychol. 17, 1–17. doi: 10.1123/jsep.17.1.1

Cushion, C., and Jones, R. L. (2006). Power, discourse, and symbolic violence in professional youth soccer: the case of albion football club. Sociol. Sport J. 23, 142–161. doi: 10.1123/ssj.23.2.142

Danziger, S., Levav, J., and Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proc. Natl. Acad. Sci. U.S.A. 108, 6889–6892. doi: 10.1073/pnas.1018033108

Davids, K., and Myers, C. (1990). The role of tacit knowledge in human skill performance. J. Hum. Mov. Stud. 19, 273–288.

Den Hartigh, R. J., Hill, Y., and Van Geert, P. L. C. (2018a). The development of talent in sports: a dynamic network approach. Complexity 2018:9280154. doi: 10.1155/2018/9280154

Den Hartigh, R. J., Niessen, A. S. M., Frencken, W. G., and Meijer, R. R. (2018b). Selection procedures in sports: improving predictions of athletes’ future performance. Eur. J. Sport Sci. 18, 1191–1198. doi: 10.1080/17461391.2018.1480662

Den Hartigh, R. J., Van Dijk, M. W., Steenbeek, H. W., and Van Geert, P. L. (2016). A dynamic network model to explain the development of excellent human performance. Front. Psychol. 7:532. doi: 10.3389/fpsyg.2016.00532

Einhorn, H. J. (1972). Expert measurement and mechanical combination. Organ. Behav. Hum. Perform. 7, 86–106. doi: 10.1016/0030-5073(72)90009-8

Elferink-Gemser, M. T., Visscher, C., Lemmink, K. A., and Mulder, T. (2007). Multidimensional performance characteristics and standard of performance in talented youth field hockey players: a longitudinal study. J. Sports Sci. 25, 481–489. doi: 10.1080/02640410600719945

Ericsson, K. A. (1998). The scientific study of expert levels of performance: general implications for optimal learning and creativity. High Abil. Stud. 9, 75–100. doi: 10.1080/1359813980090106

Ericsson, K. A., and Charness, N. (1994). Expert performance: its structure and acquisition. Am. Psychol. 49, 725–747. doi: 10.1037//0003-066x.49.8.725

Ericsson, K. A., and Charness, N. (1995). Abilities: evidence for talent or characteristics acquired through engagement in relevant activities? Am. Psychol. 50, 803–804.

Ericsson, K. A., and Kintsch, W. (1995). Long-term working memory. Psychol. Rev. 102, 211–245.

Evidence for Policy and Practice Information and Co-ordinating Centre (2006). Descriptive Mapping. Available at: http://eppi.ioe.ac.uk/cms/Default.aspx?tabid=175&language=en-US (accessed November 5, 2019).

Fischhoff, B., and Beyth, R. (1975). I knew it would happen: remembered probabilities of once—future things. Organ. Behav. Hum. Perform. 13, 1–16. doi: 10.1016/0030-5073(75)90002-1

Forrest, D., and Simmons, R. (2000). Forecasting sport: the behaviour and performance of football tipsters. Int. J. Forecast. 16, 317–331. doi: 10.1016/s0169-2070(00)00050-9

French, K. E., and McPherson, S. L. (1999). Adaptations in response selection processes used during sport competition with increasing age and expertise. Int. J. Sport Psychol. 30, 173–193.

Gabbett, T. J. (2009). Physiological and anthropometric characteristics of starters and non-starters in junior rugby league players, aged 13-17 years. J. Sports Med. Phys. Fitness 49, 233–239.

Gil, S. M., Zabala-Lili, J., Bidaurrazaga-Letona, I., Aduna, B., Lekue, J. A., Santos-Concejero, J., et al. (2014). Talent identification and selection process of outfield players and goalkeepers in a professional soccer club. J. Sports Sci. 32, 1931–1939. doi: 10.1080/02640414.2014.964290

Goldberg, L. R. (1968). Simple models or simple processes? Some research on clinical judgments. Am. Psychol. 23, 483–496. doi: 10.1037/h0026206

Goodman, N. (1946). A query on confirmation. J. Philos. 43, 383–385.

Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Inform. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x

Griffin, D., and Tversky, A. (1992). The weighing of evidence and the determinants of confidence. Cognit. Psychol. 24, 411–435. doi: 10.1016/0010-0285(92)90013-r

Grove, J. R., Fish, M., and Eklund, R. C. (2004). Changes in athletic identity following team selection: self-protection versus self-enhancement. J. Appl. Sport Psychol. 16, 75–81. doi: 10.1080/10413200490260062

Grove, W. M., and Meehl, P. E. (1996). Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: the clinical–statistical controversy. Psychol. Public Policy Law 2, 293–323. doi: 10.1037//1076-8971.2.2.293

Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., and Nelson, C. (2000). Clinical versus mechanical prediction: a meta-analysis. Psychol. Assess. 12, 19–30. doi: 10.1037//1040-3590.12.1.19

Güllich, A., Hardy, L., Kuncheva, L., Laing, S., Barlow, M., Evans, L., et al. (2019). Developmental biographies of Olympic super-elite and elite athletes: a multidisciplinary pattern recognition analysis. J. Exp. 2, 23–46.

Hastie, R., and Dawes, R. (2001). Rational Choice in an Uncertain World: The Psychology of Judgement and Decision Making. Thousand Oaks, CA: Sage Publications.

Hastie, R., and Kameda, T. (2005). The robust beauty of majority rules in-group decisions. Psychol. Rev. 112, 494–508. doi: 10.1037/0033-295x.112.2.494

Hertwig, R., Fanselow, C., and Hoffrage, U. (2003). Hindsight bias: how knowledge and heuristics affect our reconstruction of the past. Memory 11, 357–377. doi: 10.1080/09658210244000595

Höner, O., Leyhr, D., and Kelava, A. (2017). The influence of speed abilities and technical skills in early adolescence on adult success in soccer: a long-term prospective analysis using ANOVA and SEM approaches. PLoS One 12:e0182211. doi: 10.1371/journal.pone.0182211

Howe, M., Davidson, J., and Sloboda, J. (1998). Innate talents: reality or myth? Behav. Brain Sci. 21, 399–442.

Huijgen, B. C., Elferink-Gemser, M. T., Lemmink, K. A., and Visscher, C. (2014). Multidimensional performance characteristics in selected and deselected talented soccer players. Eur. J. Sport Sci. 14, 2–10. doi: 10.1080/17461391.2012.725102

Johnson, E. J., and Tversky, A. (1983). Affect, generalization, and the perception of risk. J. Pers. Soc. Psychol. 45, 20–31. doi: 10.1037/0022-3514.45.1.20

Johnson, T. R., Budescu, D. V., and Wallsten, T. S. (2001). Averaging probability judgments: monte Carlo analyses of asymptotic diagnostic value. J. Behav. Decis. Mak. 14, 123–140. doi: 10.1002/bdm.369.abs

Johnston, K., Wattie, N., Schorer, J., and Baker, J. (2018). Talent identification in sport: a systematic review. Sports Med. 48, 97–109. doi: 10.1007/s40279-017-0803-2

Jokuschies, N., Gut, V., and Conzelmann, A. (2017). Systematizing coaches’ ‘eye for talent’: player assessments based on expert coaches’ subjective talent criteria in top-level youth soccer. Int. J. Sports Sci. Coach. 12, 565–576. doi: 10.1177/1747954117727646

Kahneman, D., Knetsch, J. L., and Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. J. Polit. Econ. 98, 1325–1348. doi: 10.1086/261737

Kahneman, D., Knetsch, J. L., and Thaler, R. H. (1991). Anomalies: the endowment effect, loss aversion, and status quo bias. J. Econ. Perspect. 5, 193–206. doi: 10.1257/jep.5.1.193

Kahneman, D., and Tversky, A. (1973). On the psychology of prediction. Psychol. Rev. 80, 237–251.

Kannekens, R., Elferink-Femser, M. T., and Visscher, C. (2011). Positioning and deciding: key factors for talent development in soccer. Scand. J. Med. Sci. Sports 21, 846–852. doi: 10.1111/j.1600-0838.2010.01104.x

Keefer, Q. A. W. (2017). The sunk-cost fallacy in the national football league: salary cap value and playing time. J. Sports Econ. 18, 282–297. doi: 10.1177/1527002515574515

Kennedy, D. A. (1979). Correlation and Causality. New York, NY: Wiley-Interscience.

Keren, G. (1991). Calibration and probability judgements: conceptual and methodological issues. Acta Psychol. 77, 217–273. doi: 10.1016/0001-6918(91)90036-y

Knight, C. J., and Harwood, C. G. (2009). Exploring parent-related coaching stressors in British tennis: a developmental investigation. Int. J. Sports Sci. Coach. 4, 545–565. doi: 10.1260/174795409790291448

Koz, D., Fraser-Thomas, J., and Baker, J. (2012). Accuracy of professional sports drafts in predicting career potential. Scand. J. Med. Sci. Sports 22, e64–e69. doi: 10.1111/j.1600-0838.2011.01408.x

Lewis, M. (2003). Moneyball: The Art of Winning an Unfair Game. New York, NY: WW Norton & Company.

Lewis, M. (2016). The Undoing Project: A Friendship that Changed Our Minds. New York, NY: WW Norton & Company.

Libby, R. (1976). Man versus model of man: some conflicting evidence. Organ. Behav. Hum. Perform. 16, 1–12. doi: 10.1016/0030-5073(76)90002-7

Lichtenstein, S., Fischhoff, B., and Phillips, L. (1982). Calibration of Probabilities: The state of the Art to 1980. Woodland Hills, CA: Perception Inc.

Lidor, R., Côté, J., and Hackfort, D. (2009). ISSP position stand: to test or not to test? The use of physical skill tests in talent detection and in early phases of sport development. Int. J. Sport Exerc. Psychol. 7, 131–146. doi: 10.1080/1612197x.2009.9671896

Lidor, R., Falk, B., Arnon, M., Cohen, Y., Segal, G., and Lander, Y. (2005). Measurement of talent in team handball: the questionable use of motor and physical tests. J. Strength Cond. Res. 19, 318. doi: 10.1519/00124278-200505000-00014

Lidor, R., and Lavyan, N. Z. (2002). A retrospective picture of early sport experiences among elite and near-elite Israeli athletes: developmental and psychological perspectives. Int. J. Sport Psychol. 33, 269–289.

Lorenz, J., Rauhut, H., Schweitzer, F., and Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proc. Natl. Acad. Sci. U.S.A. 108, 9020–9025. doi: 10.1073/pnas.1008636108

Lund, S., and Söderström, T. (2017). To see or not to see: talent identification in the Swedish Football Association. Sociol. Sport J. 34, 248–258. doi: 10.1123/ssj.2016-0144

Mann, D. L., Dehghansai, N., and Baker, J. (2017). Searching for the elusive gift: advances in talent identification in sport. Curr. Opin. Psychol. 16, 128–133. doi: 10.1016/j.copsyc.2017.04.016

Martire, K. A., Growns, B., and Navarro, D. J. (2018). What do the experts know? Calibration, precision, and the wisdom of crowds among forensic handwriting experts. Psychon. Bull. Rev. 25, 2346–2355. doi: 10.3758/s13423-018-1448-3

Massey, C., and Thaler, R. H. (2010). The Loser’s Curse: Overconfidence vs. Market Efficiency in the National Football League Draft. Chicago: University of Chicago.

Maymin, P. (2017). The automated general manager: can an algorithmic system for drafts, trades, and free agency outperform human front offices? J. Glob. Sport Manag. 2, 234–249. doi: 10.1080/24704067.2017.1389248

McClelland, A. G., and Bolger, F. (1994). “The calibration of subjective probability: theories and models 1980-1993,” in Subjective Probability, eds G. Wright, and P. Ayton (Chichester: Wiley), 435–482.

McRobert, A. P., Williams, A. M., Ward, P., and Eccles, D. W. (2009). Tracing the process of expertise in a simulated anticipation task. Ergonomics 52, 474–483. doi: 10.1080/00140130802707824

Meehl, P. E. (1954). Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis: University of Minnesota Press.

Mills, A., Butt, J., Maynard, I., and Harwood, C. (2012). Identifying factors perceived to influence the development of elite youth football academy players. J. Sports Sci. 30, 1593–1604. doi: 10.1080/02640414.2012.710753

Morris, T. (2000). Psychological characteristics and talent identification in soccer. J. Sports Sci. 18, 715–726. doi: 10.1080/02640410050120096

Musculus, L., and Lobinger, B. H. (2018). Psychological characteristics in talented soccer players–recommendations on how to improve coaches’ assessment. Front. Psychol. 9:41. doi: 10.3389/fpsyg.2018.00041

Nash, C., and Collins, D. (2006). Tacit knowledge in expert coaching: science or art? Quest 58, 465–477. doi: 10.1080/00336297.2006.10491894

Neely, K. C., Dunn, J. G., McHugh, T. L. F., and Holt, N. L. (2016). The deselection process in competitive female youth sport. Sport Psychol. 30, 141–153. doi: 10.1123/tsp.2015-0044

Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. doi: 10.1037//1089-2680.2.2.175

North, J. S., Ward, P., Ericsson, A., and Williams, A. M. (2011). Mechanisms underlying skilled anticipation and recognition in a dynamic and temporally constrained domain. Memory 19, 155–168. doi: 10.1080/09658211.2010.541466

Parikh, R., Mathai, A., Parikh, S., Sekhar, G. C., and Thomas, R. (2008). Understanding and using sensitivity, specificity and predictive values. Indian J. Ophthalmol. 56, 45–50.

Pearson, D. T., Naughton, G. A., and Torode, M. (2006). Predictability of physiological testing and the role of maturation in talent identification for adolescent team sports. J. Sci. Med. Sport 9, 277–287. doi: 10.1016/j.jsams.2006.05.020

Philippaerts, R. M., Vaeyens, R., Janssens, M., Van Renterghem, B., Matthys, D., Craen, R., et al. (2006). The relationship between peak height velocity and physical performance in youth soccer players. J. Sports Sci. 24, 221–230. doi: 10.1080/02640410500189371

Phillips, E., Davids, K., Renshaw, I., and Portus, M. (2010). Expert performance in sport and the dynamics of talent development. Sports Med. 40, 271–283. doi: 10.2165/11319430-000000000-00000

Pinder, R. A., Renshaw, I., and Davids, K. (2013). The role of representative design in talent development: a comment on “Talent identification and promotion programmes of Olympic athletes”. J. Sports Sci. 31, 803–806. doi: 10.1080/02640414.2012.718090

Plessner, H., Betsch, C., and Betsch, T. (eds) (2011). Intuition in Judgment and Decision Making. New York, NY: Psychology Press.

Poincaré, J. H. (1913). The Foundations of Science (Translated). Lancaster, PA: The Science Press.

Raab, M., and Johnson, J. G. (2007). Expertise-based differences in search and option-generation strategies. J. Exp. Psychol. Appl. 13, 158–170. doi: 10.1037/1076-898x.13.3.158

Roca, A., Williams, A. M., and Ford, P. R. (2012). Developmental activities and the acquisition of superior anticipation and decision making in soccer players. J. Sports Sci. 30, 1643–1652. doi: 10.1080/02640414.2012.701761

Sawyer, J. (1966). Measurement and prediction, clinical and statistical. Psychol. Bull. 66, 178–200. doi: 10.1037/h0023624

Schorer, J., Rienhoff, R., Fischer, L., and Baker, J. (2017). Long-term prognostic validity of talent selections: comparing national and regional coaches, laypersons and novices. Front. Psychol. 8:1146. doi: 10.3389/fpsyg.2017.01146

Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail but Some Don't. New York, NY: Penguin.

Simon, H. A. (1955). A behavioral model of rational choice. Q. J. Econ. 69, 99–118.

Simon, S., Collins, L., and Collins, D. (2017). Observational heuristics in a group of high level paddle sports coaches. Int. Sport Coach. J. 4, 235–245. doi: 10.1123/iscj.2017-0012

Simonton, D. K. (1999). Talent and its development: an emergenic and epigenetic model. Psychol. Rev. 106, 435–457. doi: 10.1037//0033-295x.106.3.435

Slack, T., and Parent, M. M. (2006). Understanding Sport Organizations: The Application of Organization Theory, 2nd Edn. Champaign, IL: Human Kinetics.

Slovic, P., Finucane, M., Peters, E., and MacGregor, D. G. (2002). Rational actors or rational fools: implications of the affect heuristic for behavioral economics. J. Socio Econ. 31, 329–342. doi: 10.1016/s1053-5357(02)00174-9

Staw, B. M., and Hoang, H. (1995). Sunk costs in the NBA: why draft order affects playing time and survival in professional basketball. Adm. Sci. Q. 40, 474–494.

Staw, B. M., and Ross, J. (1989). Understanding behavior in escalation situations. Science 246, 216–220. doi: 10.1126/science.246.4927.216

Surowiecki, J. (2004). The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business. New York, NY: Random House.

Swets, J. A. (1988). Measuring the accuracy of diagnostic systems. Science 240, 1285–1293. doi: 10.1126/science.3287615

Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable. New York, NY: Random House.

Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know?. Princeton, NJ: Princeton University Press.

Tetlock, P. E. (2016). Superforecasting: The Art and Science of Prediction. Canada: Penguin Random House.

Toering, T. T., Elferink-Gemser, M. T., Jordet, G., and Visscher, C. (2009). Self-regulation and performance level of elite and non-elite youth soccer players. J. Sports Sci. 27, 1509–1517. doi: 10.1080/02640410903369919

Tromp, E. Y., Pepping, G. J., Lyons, J., Elferink-Gemser, M. T., and Visscher, C. (2013). “Let’s pick him!”: ratings of skill level on the basis of in-game playing behaviour in Bantam League Junior ice hockey. Int. J. Sports Sci. Coach. 8, 641–660. doi: 10.1260/1747-9541.8.4.641

Tsay, C. J., and Banaji, M. R. (2011). Naturals and strivers: preferences and beliefs about sources of achievement. J. Exp. Soc. Psychol. 47, 460–465. doi: 10.1016/j.jesp.2010.12.010

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683

Vaeyens, R., Lenoir, M., Williams, A. M., and Philippaerts, R. M. (2008). Talent identification and development programmes in sport. Sports Med. 38, 703–714. doi: 10.2165/00007256-200838090-00001

Vaeyens, R., Malina, R. M., Janssens, M., Van Renterghem, B., Bourgois, J., Vrijens, J., et al. (2006). A multidisciplinary selection model for youth soccer: the Ghent Youth Soccer project. Br. J. Sports Med. 40, 928–934. doi: 10.1136/bjsm.2006.029652

Västfjäll, D., Peters, E., and Slovic, P. (2014). The affect heuristic, mortality salience, and risk: domain-specific effects of a natural disaster on risk-benefit perception. Scand. J. Psychol. 55, 527–532. doi: 10.1111/sjop.12166

Vrljic, K., and Mallett, C. J. (2008). Coaching knowledge in identifying football talent. Int. J. Coach. Sci. 2, 63–81.

Waldron, M., and Worsfold, P. (2010). Differences in the game specific skills of elite and sub-elite youth football players: implications for talent identification. Int. J. Perform. Anal. Sport 10, 9–24. doi: 10.1080/24748668.2010.11868497

Wallsten, T. S., and Diederich, A. (2001). Understanding pooled subjective probability estimates. Math. Soc. Sci. 41, 1–18. doi: 10.1016/s0165-4896(00)00053-6

Ward, P., and Williams, A. M. (2003). Perceptual and cognitive skill development in soccer: the multidimensional nature of expert performance. J. Sport Exerc. Psychol. 25, 93–111. doi: 10.1123/jsep.25.1.93

Wattie, N., and Baker, J. (2017). “Why conceptualizations of talent matter: implications for skill acquisition and talent identification and development,” in Routledge Handbook of Talent Identification and Development in Sport, eds J. Baker, S. Cobley, J. Schorer, and N. Wattie (London: Routledge).

Wells, G. D., Elmi, M., and Thomas, S. (2009). Physiological correlates of golf performance. J. Strength Cond. Res. 23, 741–750. doi: 10.1519/JSC.0b013e3181a07970

Williams, A. M., and Reilly, T. (2000). Talent identification and development in soccer. J. Sports Sci. 18, 737–750.

Yaniv, I. (2004). Receiving other people’s advice: influence and benefit. Organ. Behav. Hum. Decis. Process. 93, 1–13. doi: 10.1016/j.obhdp.2003.08.002

Keywords: talent, talent selection, talent wastage, selection bias, cognitive bias, decision-making

Citation: Johnston K and Baker J (2020) Waste Reduction Strategies: Factors Affecting Talent Wastage and the Efficacy of Talent Selection in Sport. Front. Psychol. 10:2925. doi: 10.3389/fpsyg.2019.02925

Received: 13 August 2019; Accepted: 11 December 2019;
Published: 10 January 2020.

Edited by:

Adelaida María Castro Sánchez, University of Almería, Spain

Reviewed by:

Andreas Ivarsson, Halmstad University, Sweden
Ruud J. R. Den Hartigh, University of Groningen, Netherlands

Copyright © 2020 Johnston and Baker. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kathryn Johnston, Krobinso@yorku.ca

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.