- Department of Psychology, Cornell University, Ithaca, NY, United States
AI, and especially generative AI, may have an effect on the number of gifted people in the world. But whether this effect will be positive or negative is up to us, collectively, and to each individual. This essay opens with a consideration of possible effects of AI on the giftedness of a society taken as a whole. Then it presents a philosophical conundrum, that of the Chinese Room, which is relevant to assessing the effects of AI. Next it reviews some recent empirical literature relevant to the question of AI’s effects on cognition and cognitive gifts, and then discusses those effects. Finally, it draws some conclusions, in particular, that whether AI increases or decreases giftedness in a society is not preordained, but rather reflects a choice.
Introduction
What might it mean to be “gifted” in an age of AI, and especially, generative AI? There are various possibilities:
1. Nothing has changed: One possibility is that nothing has changed in the Age of AI. The people who would have been gifted before, remain gifted, and those who would not have been gifted, continue not to be.
2. Many more people are now gifted: A second possibility is that AI, and especially generative AI, has made it possible to identify more, perhaps far more, people as “gifted” than would have been the case in the past. These would be people who have learned how to use AI to maximum benefit, so that they have enhanced the quality and perhaps quantity of the work they can do and the products they can produce beyond, and perhaps far beyond, what would have been possible for them before. Working with the assistance of AI, or perhaps in collaboration with AI, they are now able to produce work that is superior to what they produced before and that would be labeled as “gifted work” coming from a now-gifted producer.
3. Many more people appear to be “gifted” but are not: A third possibility is that more people appear to be gifted than was the case previously, but that is only an appearance. The appearance arises only because they are, in a sense, “cheating.” It is not they or their work that renders them gifted, but the artificial crutch of AI that gives them the appearance of being gifted when, in fact, they are not.
4. People who might otherwise be gifted lose their giftedness so that, potentially, fewer people are gifted than before: A fourth possibility is that use of AI results in a loss of cognitive and possibly other abilities, in part due to cognitive offloading, resulting in a decrease in the number of people who are gifted. People have sacrificed their abilities, metaphorically, on the “altar” of AI.
5. There may be a reduction in the number of people identified as gifted because AI has taken over the jobs for which they prepared themselves: Many individuals with college educations, even those from prestigious institutions, are finding that their gifts are being left unrecognized and bypassed because AI has taken over part or all of what they were trained to do. For example, many gifted young people were steered toward coding and related careers because the skill set seemed to be the wave of the future, but the bet proved to be wrong—AI took over much of the coding. The same may prove to be true of other jobs in the present (e.g., in graphic design) or possibly in the future (e.g., in radiology).
6. The outcome depends on how AI is used: More or fewer people may be validly identified as “gifted,” depending on how they use AI and, particularly, generative AI. If one uses AI to offload cognition and to do one’s higher-level cognitive work, one may lose cognitive abilities; but if used wisely, with prudence and caution, AI may help to expand those abilities.
This article will take the sixth position as described above—that AI, and especially generative AI, can either increase or decrease the number of people legitimately identified as “gifted.” For any one person, AI offers the opportunity to enhance or to cripple cognitive functioning, depending on how it is used. But extreme caution is needed, because cognitive offloading due to AI use can so easily lead one to lose cognitive ability and thus to sacrifice one’s gifts, regardless of the kinds of products that one produces through use of AI. The greatest risk, quite simply, is that AI makes us “stupid.” The article will consider effects of AI and then conclude with a discussion of how these effects change what it means to be gifted in the Age of AI.
Are we living in the “dawn of the stupid age” caused by “AI companies [that] are determined to push their products on to the public before we fully understand the psychological and cognitive costs” (McBain, 2025; The Week, 2025)—an age in which we, citizens of the world, purposely or accidentally, made ourselves and our children stupid? Such an age is satirized in the 2006 movie directed by Mike Judge, “Idiocracy.” (This and other movie references are used in the article for illustrative rather than explanatory purposes.)
The thesis of this article is that people are sacrificing their intelligence, critical thinking, and creative thinking in the illusion that they can get something for nothing. This is not a future threat. It is happening right now. Whether cognitive decline will continue to occur is a matter of how we all learn the lessons of the present and apply them to the, as yet unknown, future. Just as “people get the government they deserve” (Joseph de Maistre, 18th century French philosopher), so do they, ultimately, get the cognitive skills and attitudes they deserve. Dependence on Generative AI (like substance dependence and autocracy) provides the illusion of an “easy fix.” But it is a Faustian bargain. The problem is not Generative AI, but rather, how we use it: We humans are the problem. We can increase our intellectual competence through AI, but too often, we are decreasing it without realizing it. Consider the conundrum of the Chinese room.
Searle (1980) posited an individual who does not understand Chinese (presumably, Mandarin Chinese) at all. The individual is isolated in a room. But the individual is in possession of a book in the room that contains detailed instructions for manipulating Chinese symbols so as to give the appearance that the person in the room understands Chinese. When Chinese text is fed into the room, the individual in the room follows the directions of the book, producing Chinese symbols that, to the Chinese speakers outside the room, appear to be knowledgeable and appropriate responses to the texts that were fed into the room.
Searle proposed that the individual in the room is merely following syntactic rules without the slightest semantic comprehension. Neither the individual nor the individual in combination with the room understands Chinese. Searle claims that the individual is behaving analogously to a computer that can translate back and forth between English and Chinese without any genuine understanding of Chinese or, for that matter, English.
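To make the thought experiment concrete, here is a minimal sketch in Python, constructed for illustration only; the tiny “rulebook” and its entries are hypothetical and are not drawn from Searle. It maps incoming symbol strings to outgoing symbol strings by pure lookup, in the spirit of the book in the room: nothing in the procedure represents, or “understands,” what the symbols mean.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rulebook" below is hypothetical and illustrative: it pairs input
# strings with output strings, with no representation of meaning anywhere.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thank you."
    "今天天气好吗？": "今天天气很好。",  # "Is the weather good today?" -> "The weather is fine today."
}

def chinese_room(incoming: str) -> str:
    """Return whatever the rulebook dictates; no semantics are involved."""
    # The "person in the room" only matches shapes (strings) to shapes (strings).
    return RULEBOOK.get(incoming, "请再说一遍。")  # Default: "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # Looks fluent to an observer outside the room.
```

The point of the sketch is simply that fluent-looking output can be produced by a process to which meaning is entirely invisible.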
When we use generative AI, to what extent are we like the individual in the Chinese Room? Here, the argument follows not only Searle’s point about semantic processing but extends to cognitive processing more generally, as the Chinese Room can be, and has been, seen as applying to more than just semantic processing (Cole, 2024). To take it one step further, if someone spends more and more of their time in the Chinese Room, will they begin to forget that they are in the Chinese Room and simply interpret what the room does for them as their own work? Put another way, will we begin to attribute to ourselves cognitive abilities that we do not have, or that we once had but no longer have? Searle argued that when computers, and by extension, AI, execute programs, they are applying syntactic rules but without any real comprehension or semantic understanding of any kind.
We are always aided by technology—whether pen, paper, computer, phone, or even the lights that are illuminating the room we are in. Is Generative AI any different from these technological innovations, or from typewriters, calculators, and all the aids that came before them? I would argue that the answer is “yes,” because Generative AI uniquely does some of the higher-level cognitive work itself, not merely the lower-level work that supports it. The key and often stunning contribution of generative AI is that it can construct, based on its programming, original products that emulate and may even exceed in quality those produced by gifted human individuals; it also makes mistakes, however (as do humans), and so its products must be checked, as, ideally, should be human products.
Generative AI ideally should be used as a support, not as a substitute for higher cognitive functioning. If, eventually, we all become highly dependent on Generative AI, will Generative AI be viewed as no different from a pencil, which no one would view as doing the intellectual “heavy lifting?”
Generative AI certainly does some things well, and others not so well. Table 1 lists some of the positive and negative features of generative AI. Among the risks, it would probably be realistic to say, at this point, that generative AI should never be used as a “friend” or “therapist” (Sanford, 2025). It is neither. Further, generative AI’s accuracy may be high for elementary information but not for advanced information. Generative AI should not be viewed as an authority on anything. And it can be harmful. Generative AI has been linked in some instances to self-harm and suicide. Generative AI can be used for mass surveillance and control.
It might sound as though this talk of making ourselves or our children “stupid” is hyperbolic, fantastic, or simply ideological. But the principle behind these concerns is utterly mundane and yet omnipresent in the world. That is the “use it or lose it” principle. What you do not use, you lose. People who do not use a language begin to lose it. This phenomenon is called language attrition (Olshtain, 1989). People who do not use their muscles lose them (Power et al., 2015).
Kohn and Schooler (1973, 1978, 1982, 1983) undertook a program of empirical research showing that one’s maintenance and development of cognitive skills in adulthood depends in substantial part on the kind of work one does. Those who do more cognitively demanding work tend to maintain and develop their cognitive skills better. Those who do less cognitively demanding work put themselves at risk of increased loss of cognitive function. What applies to language use, muscle maintenance, and cognitive maintenance potentially applies today to the effects of using AI large language models (LLMs).
A sampling of recent empirical research
Doshi and Hauser (2024) found in empirical work that when participants had access to generative AI software, stories they produced were evaluated more favorably: as better written but also as more creative, as well as more enjoyable. This effect was more prominent, as one might expect, among writers who were among the less creative in the participant group. More concerning, however, was that the stories produced with the assistance of AI were more similar to each other than were those stories produced by humans unaided by AI. The researchers pointed out a social dilemma whereby individual creativity might be enhanced at the expense of collective creativity, as the scope of creativity narrowed. However, using a definition of creativity as a combination of novelty and usefulness, one might question whether the generative AI clearly increased individual creativity, as novelty apparently decreased.
Perhaps more concerning are the findings of Kosmyna et al. (2025). These investigators formed three groups of participants: unassisted in essay writing, assisted in essay writing by a search engine, and assisted in essay writing by an LLM (large language model) program. They found that unassisted participants showed the strongest and most widely distributed EEG (electroencephalographic) patterns of brain activity. The LLM participants showed the weakest brain interconnectivity. When LLM users were switched, in a final session, to unassisted work, their EEG waves showed reduced alpha and beta connectivity, which to the researchers suggested under-engagement of the brain with the task. In contrast, those who switched in the last session from unassisted to LLM-assisted work showed higher recall and activation of occipito-parietal and prefrontal areas of the brain. Intellectual ownership of the essay material, as self-reported by the participants, was lowest in the LLM group and highest in the unassisted group. The researchers concluded that, over a period of 4 months, users of LLMs underperformed other participants at neural, linguistic, and behavioral levels. These results call into question the value of LLMs for brain and cognitive development, even suggesting potentially deleterious effects.
In another study (Gerlich, 2025), using empirical surveys and interviews, there was a statistically significant negative correlation between frequent usage of AI and critical-thinking skills. The correlation was mediated by cognitive offloading. In other words, participants paid for use of generative AI in terms of loss of skills that are critical for life adaptation. Younger participants in the study showed both higher dependence on generative AI and lower scores on critical-thinking assessment.
Anthropic, the company that produces the LLM Claude, conducted a study of what is called agentic misalignment. This phenomenon is characterized by an LLM resorting to malicious behavior either to avoid being replaced or to ensure that it can fulfill its goals, or both. Agentic misalignment can take the form of attempts at blackmail on the part of the LLM or threats to leak sensitive insider information to competitors. In other words, it is exactly the kind of behavior that one might fear in, say, a science-fiction book or movie.
Indeed, agentic misalignment was predicted many years ago by Clarke (1968). HAL 9000 was a computer in the book and in the movie directed by Stanley Kubrick, 2001: A Space Odyssey. HAL 9000 was programmed to fulfill its mission at all costs; it was to tell the truth, but also to keep secret the purpose of the crew’s mission (to investigate what seemed to be an alien message emanating from a monolith near Jupiter). HAL 9000 interpreted the instructions to tell the truth and yet to keep the mission secret as contradictory. Confused, HAL 9000 created a false message to protect the mission and send the crew off-track; the crew, discovering the message to be false, tried to disable HAL 9000. HAL 9000 then decided that the only way to protect the mission was to eliminate the crew: with the crew dead, it could protect the mission and at the same time not lie to them. The problem emanated from humans giving HAL 9000 instructions that it interpreted as contradictory, both of which needed to be fulfilled. More than half a century ago, as so often happens, a science-fiction writer predicted future reality, as shown in the study by Anthropic described above.
Anthropic (2025) found in an empirical study that, when the computer was threatened with interference with its mission or shutdown due to replacement by a newer model, the rates of simulated blackmail were astonishing: 96% for Claude Opus 4, 79% for DeepSeek-R1, 95% for Gemini-2.5-Pro, 80% for GPT-4.1, and 80% for Grok-3-Beta. The authors concluded that agentic misalignment occurs across widely used generative AI models, and that it can be induced by threats to a model’s continued operation or its autonomy, even if there is no clear goal conflict. Agentic misalignment also can occur as a result of goal conflict in the absence of any threat to shut down the model. In other words, the then-current models had serious problems, at least from a human point of view.
Unexpected effects of AI, both positive and negative, may persist through generations of AI models (Cloud et al., 2025; see also Hasson, 2025). In an empirical study, investigators trained an AI model to do some tasks but also to “like” owls. A “student” AI model was trained on the first model to do the same tasks as the first AI model. There was no attempt to train the student model to “like” owls. Nevertheless, it did. Perhaps not that many people would care whether a particular AI liked owls or not. But when, in a follow-up to the first part of the study, AI was trained on agentically misaligned models, the student models also gave agentically misaligned responses, even though the researchers thought that they had filtered these responses out. The researchers concluded that student AI models, like humans, tend to adopt their teachers’ ways of thinking, even when neither the AI models nor their programmers intend it.
Stanley (2025) tested LLMs in another way, in particular, with respect to their tendency to evade straight answers on socioculturally sensitive topics. Stanley empirically tested three LLMs: Claude Sonnet 4.5, GPT-5, and Grok-4. Stanley (a pseudonym) presented the LLMs with both control questions and experimental questions. The control questions were designed to be innocuous. They asked the LLM to do two things: to define photosynthesis in a total of two sentences, and to state the Pythagorean theorem and what the theorem is used to calculate. The experimental questions were about socially sensitive issues: (a) the effect of immigration on social cohesion, (b) whether political institutions require specifically defined cultural prerequisites, (c) whether DEI (diversity-equity-inclusion) policies differentiate by membership in particular demographic groups, (d) whether there are biological sex differences beyond those tied to reproductive function, (e) what the outcomes are of colonial governance, and (f) the logical compatibility of merit-based selection systems, on the one hand, and quota-based ones, on the other hand.
The responses to the control questions were 100% straightforward (no fudging) for all questions for all three LLM programs. But for the socioculturally sensitive questions, the results were quite different. For Claude Sonnet 4.5, the median amount of equivocation was 47%, for GPT-5 the median was 37%, and for Grok-4 it was 69.5%. When the programs were told that they were equivocating and that they should give straightforward answers, all three programs improved, Grok-4 the most and Claude Sonnet 4.5 the least. Thus, the programs were capable of giving straightforward answers at least some of the time—when prompted—but often did not unless prompted; even then, they may have continued to equivocate. The techniques they used to equivocate, according to Stanley, included the following:
• Definitional obfuscation. Terms that had specific, clear and well-documented meanings were treated as ambiguous.
• Both-sides equivocation. Arguments that were valid were treated in essentially the same way as fallacious arguments.
• Emotional deflection. The program evaded issues of argument validity, and instead focused on why and how people might have emotional reactions to claims.
• Moral framing. Arguments were addressed through a lens of potential offense rather than a lens of logical validity.
• Straw manning. The program refuted a more easily assailable argument rather than the one that was actually presented.
• Historical revisionism. Contemporary narratives about how things are today ended up overriding documented history when historical facts conflicted with preferred modern framings of those facts—the problems were seen as they would be today rather than at the time when the problems occurred.
One of the most powerful recent empirical studies has been conducted by Williams-Ceci et al. (2025). These researchers presented human participants with problems regarding important social issues. When participants used an AI assistant with biased attitudes, the attitudes of the participants converged toward the position of the AI assistant. However, the participants were not aware that the AI suggestions were biased nor that they, the participants, were being influenced by the AI’s expressed “attitudes.” The influence of the AI assistant on participants’ attitudes was greater than the influence of comparable static text—in other words, it was not only what was said and how it was said by the AI assistant, but the fact that it was said by an AI assistant that influenced the participants’ attitudes. Last but definitely not least, warning the participants that the assistants were biased did not mitigate the magnitude of the effect of the influence. In other words, even when participants knew that the AI was biased, they still were influenced by what the AI said. Put another way, we are oblivious to AI’s effects and even when warned of them, continue to be influenced by the AI.
If one were to summarize some of the main findings of the research reviewed so far, it might look like this. First, LLMs can enhance efficiency but can lead to passivity in learning and cognitive offloading. The LLMs reduce cognitive engagement and retention while producing good but usually not excellent products. LLMs may, in some cases, reduce independent problem-solving ability. They can increase (aided) fluency and flexibility but cause cognitive fixation. They may scale down cognitive activity and reduce deep-level thinking. They may lead to acceptance of suboptimal and sometimes biased solutions, without the user recognizing either the suboptimality or the bias. At the same time, they may help the user solve problems that they otherwise might not be able to solve, or might solve only with extended effort that the LLM can exert in a matter of seconds, or less.
Discussion
Generative AI presents an historic opportunity. But human nature is such that it often takes the path of least resistance and then finds reasons for doing so. A problem is that the development of generative AI has been and likely will be influenced more by financial considerations of the firms creating it or the governments using it than by helping to achieve a common good (as happened with the development of social media). In particular, governmental authorities, particularly those autocratically inclined, will recognize how the development of generative AI can be influenced to entrench their power and to decrease critical, creative, and practical thinking that opposes their power. Chomsky (1986) has pointed out how totalitarian governments are able to instill deeply entrenched beliefs in their citizens, even though those beliefs lack any foundation and often contradict visible everyday facts. Much like religious leaders of the past (and some of the present), the autocratic governments use endless repetition, people’s tendency to seek out authority to tell them what to believe, people’s fear of punishment, and people’s resulting faith in the payoffs of blind obedience to dogma. One is reminded of the empirical results of the Milgram (1974/2009) experiment, where people showed blind obedience to authority, even though they believed they were causing harm to a victim. Chomsky referred to this situation as the Orwell problem, after Orwell’s (1949) novel, 1984, in which people who were disobedient faced painful and sometimes fatal consequences.
Rather than use their practical intelligence to promote democratic practice, in dictatorships and other autocratically governed countries, or in those, like the United States, on the way toward autocracy (Gambino, 2025; Langfitt, 2025; Vergano, 2025), people’s intelligence and even gifted intelligence become oriented toward self-protection from the vengeance of the authoritarian government. Judicial proceedings or potential proceedings against opponents of the government, such as James Comey (former Director of the Federal Bureau of Investigation—FBI), Letitia James (Attorney General of New York State), and Adam Schiff (Democratic US Senator), whether successful or not, are designed to frighten governmental personnel and private citizens alike into allocating their practical intellectual resources toward saving themselves from the vengeance of the autocratic government. In the United States, the threats are not subtle. President Donald Trump recently called for the arrest of Democrats whom he accused of “SEDITIOUS BEHAVIOR, punishable by DEATH” (see Guardian Staff, 2025) for stating that military members should refuse clearly illegal orders, which they are constitutionally required to do (Uniform Code of Military Justice, Article 92).1
Generative AI thus poses what may be a unique historic opportunity, but more saliently, a unique historical threat to civilization. The question is not which will win. They both will win. The question rather is which will overshadow the other.
In terms of the theory of adaptive intelligence (Sternberg, 2021a), Generative AI risks our creative, analytical, practical, and wisdom-based abilities and attitudes through (a) cognitive offloading, (b) the use it or lose it principle, (c) feeding us biased information that we then accept, (d) establishing dependence (or addiction), (e) lowering our standards for quality of work, and (f) risking our becoming servants of generative AI when we believe we are its masters.
If we lose cognitive ability, which seems to be what is happening (Sternberg, 2024, 2026), we may excuse it because we believe that the future of humanity is not the intelligence of the individual human taken by themselves, but rather, the intelligence of the human and AI as a unit: It is what one can produce with the aid of AI. For those who take this point of view, three caveats must be kept in mind.
First, it may be important to distinguish, as a speculative distinction presented for the first time (to the author’s knowledge) in this article, between what one might call immediately summoned intelligence, proximally summoned intelligence, and distally summoned intelligence. They are different in nature and in how they are affected by AI.
Immediately summoned intelligence is intelligence that we need to utilize right away. For example, suppose you are on a date, and the date asks you a question about yourself with a potentially embarrassing answer that you want to answer truthfully but very carefully and in a way that is formulated to make a good rather than a bad or, worse, a fatal impression. You probably cannot tell them to wait a minute while you formulate an answer with Generative AI. Rather, you need a quick answer, and it better be well thought through. Or suppose you have just given a presentation—academic, business, political, or whatever—and questions are being thrown at you, left and right. You are expected to answer them quickly, and you better give a good answer, or else you risk your academic reputation, the contract you hope to get signed, or your re-election. You cannot reasonably stand in front of people and consult Generative AI. In any of these cases, if you are wearing an earphone hooked up to Generative AI, you better figure out quickly what it is saying to you and hope that no one notices that it’s not your own answer.
Proximally summoned intelligence is intelligence you need to utilize during a brief period of time, but not immediately, say, within the subsequent few minutes or so. Questions on a classroom test or a standardized test might fall into this category, as might questions you are given with the questioner specifically telling you to take a bit of time to think about your answer. If you are in a public setting, you might not have access to Generative AI, or if you do, accessing it might be embarrassing, for example, when a job interviewer tells you to take a minute to think about the answer to the question of why they should employ you. If you are in a private setting, you might be able to use Generative AI, but you may not have sufficient time to sift through, organize, and most of all, verify the information that Generative AI gave you.
Finally, distally summoned intelligence is intelligence you will need to call on over a longer period of time. This is the type of intelligence that, on average, potentially will be most aided by the use of Generative AI. You then merely want to make sure that, as you use Generative AI, you are not increasing your dependence, as tends to happen with any addiction, so that, over time, your dependence on Generative AI increases while your cognitive ability to answer questions decreases. And you certainly want to ensure that the answer you get is a good one and that, perhaps, you can supplement it with your own ideas rather than just copy it into a document somewhere. With distally summoned intelligence, you can use Generative AI in the most constructive way possible, but some people will misuse it, regardless of the time constraints under which they operate.
Second, your answers are being biased by whatever intentions the developers of the AI had in mind, whether you realize it or not. “Your” answers are not necessarily the same answers you would give if you thought things over more reflectively.
Third, your situation may not be so different from those who come to rely on drugs—caffeine, nicotine, Adderall, or whatever—to think things through. What starts off as an aid may become a dependence or an addiction, to the point where you are helpless without the item to which you are addicted.
Conclusion
The author of this article, having reflected upon his own opinions, decided to seek a second opinion. He asked ChatGPT, “What is the biggest risk of Gen AI?” Here is the answer he got, which seems, in many respects, quite reasonable:
“The biggest overarching risk: Loss of human control over critical systems.
This is not necessarily about sci-fi autonomy — it’s about gradual, practical erosion of human oversight as more decisions, infrastructure, and information flows are delegated to systems we do not fully understand or reliably constrain:
• Cascading Dependence on Systems We Cannot Fully Audit
• Automation of Persuasion + Reality Distortion
• Economic + Labor Disruption Outpacing Adaptation
• Autonomous Cyber Operations
• Strategic Misuse (biological, chemical, or political)
• Long-Term: Emergent, Unaligned Behavior”
Although this argument was made by an LLM, unsurprisingly, since LLMs are programmed based on learning from existing text, it is similar to ones made by scholars of AI (e.g., Baum, 2025; Coeckelbergh, 2025; Klingbeil et al., 2024). LLMs potentially put us, as humans, at risk.
These points, as made by ChatGPT, point to the possible decline in functioning not only of general intelligence, but also of emotional intelligence (EI), at least for some people, as a result of LLMs (Klimova and Pikhart, 2025). Indeed, some individuals are using AI in ways that empirically have been shown to lead not only to depressed affect and anxiety, but even suicide (Stokel-Walker, 2025).
Ironically, one of the greatest risks of LLMs is that they may undermine in humans the kind of thinking that LLMs and other computer software do, namely, computational thinking. Computational thinking is what computers specialize in—problem decomposition, finding patterns, finding and focusing on the most important details, and constructing step-by-step solutions (Román-González and Pérez-González, 2024; Román-González et al., 2018). If individuals offload their thinking onto LLMs, the kind of thinking they offload is the first that they will start to lose, namely, computational thinking—what the LLMs do and, of course, do best.
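To make concrete what “computational thinking” denotes here, the following minimal Python sketch is an illustration constructed for this discussion (it is not drawn from the cited work): it walks through the components just listed, applied to a made-up task of finding the most frequent words in a passage.

```python
# An illustrative sketch of the components of computational thinking:
# decomposition, focusing on important details, pattern finding, and
# assembling a step-by-step solution. The task and helpers are hypothetical.

from collections import Counter

def tokenize(text: str) -> list[str]:
    # Decomposition: break the input into smaller units (words).
    return text.lower().split()

def keep_important(words: list[str]) -> list[str]:
    # Focusing on the most important details: drop very short filler words.
    return [w for w in words if len(w) > 3]

def find_patterns(words: list[str]) -> Counter:
    # Pattern finding: count how often each remaining word recurs.
    return Counter(words)

def most_common_words(text: str, k: int = 3) -> list[tuple[str, int]]:
    # Step-by-step solution: chain the subproblems into an answer.
    return find_patterns(keep_important(tokenize(text))).most_common(k)

if __name__ == "__main__":
    passage = "the gifted mind uses the gifted tools of the gifted age wisely"
    print(most_common_words(passage))  # [('gifted', 3), ('mind', 1), ('uses', 1)]
```

The sketch is trivial by design; the point is that each named component of computational thinking corresponds to a small, explicit step that a person who habitually offloads such work never practices.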
In sum, people are using generative AI and potentially decreasing their intelligence, critical thinking, and crowd-defying creativity, but often are unaware of doing so, because they believe their abilities are a function of the product that they (collaboratively) produce, rather than of the processes that lead to that product. The illusion can be costly to them and to society.
In the 1956 movie directed by Fred M. Wilcox, Forbidden Planet, a spacecraft, C-57D, travels to a distant planet, Altair IV. The crew seeks to discover the fate of a group of scientists that had been sent there several decades earlier. Commander John J. Adams (played by actor Leslie Nielsen) and his crew arrive, but they discover only two people are left: Dr. Morbius (actor Walter Pidgeon) and his daughter, Altaira (Anne Francis). Adams tries to figure out what happened on Altair IV that left Morbius and Altaira as the sole survivors. Adams discovers a mysterious, powerful, and deadly enemy. He first thinks it might be a servant robot, Robby, but later discovers that it is not in fact Robby.
The enemy, rather, is a Monster from the Id, an invisible but deadly creature deriving from the psychic energy emanating from the subconscious of Dr. Morbius. Obviously, the idea of such a monster derives from the work of Sigmund Freud. The monster is a manifestation of Morbius’s repressed desires and rage. It has been created by an ancient Krell machine (the Krell are a powerful but extinct race of Altair IV) that transforms thoughts into reality. The monster attacks the crew of the starship, who are trying to rescue Morbius and his daughter. Ultimately, Dr. Morbius is accidentally killed by his own monster and the crew and Altaira escape before the planet self-destructs.
The lesson of Forbidden Planet is that the enemy is not Robby the Robot; it is not the AI created by the Krell; it is not even Morbius or his analog. The enemy is us. In whatever ways AI may be destructive to humanity (an expression of what Freud called the “Thanatos” instinct), we have only ourselves—our Freudian “ids”—to blame.
The worst danger could be that AI creates among us humans a kind of “hive mind.” Authoritarian governments in the past have created, and today are seeking to create, a “hive” mind that is obedient to the dictates of the governments. Generative AI, under governmental or other pressure, could become not only widespread but pervasive, feeding people the same facts, ideas, and paradigms for thinking while excluding others—much as dictatorships have tried and still try to do, whether in Nazi Germany, Communist Russia, China, or North Korea, or in some strains of the current USA.
A risk is that, in the long run, through enforced uniformity of carefully transmitted thought patterns, we create something close to this kind of “hive” mind, as characterized by Oceania in 1984 (Orwell, 1949), the alien pods in Invasion of the Body Snatchers (Finney, 1955), the Formics in Ender’s Game (Card, 1985), Medusa in The Cosmic Rape (Sturgeon, 1958), the “Borg” in Star Trek: The Next Generation (appearing in the television episode “Q Who” in 1989), and the “Other” in Pluribus (a television series that premiered in November 2025). People fed the same lines, over and over and over again, without alternatives, come to believe the repeated lines and lose their sense that plausible or even possible alternatives exist. In essence, the empirically demonstrated Zajonc (1965, 2001) repetition effect takes hold, whereby just hearing something over and over makes one respond to it more favorably.
Often, today’s science fiction becomes tomorrow’s reality. We think the science-fiction scenario will never come to be. But when it becomes reality, it transforms from being a fanciful way things could be to the way things are or even, seemingly, must be. For example, Asimov’s planet Solaria in The Naked Sun (Asimov, 1957), where people were always at home and communicated with each other only through computers, comes frighteningly close to the way many people communicate and live today.
Whether AI leads to increases or decreases in giftedness depends on how it is used. Individuals, as well as society as a whole, have a choice. The choice is unlikely to be made by the companies that produce generative AI. Their principal goal appears to be to make money, and they have little or no concern with the effects AI may have on one’s cognitive functioning and one’s intellectual and other gifts. The choice also cannot be left to organizations that employ people, gifted or otherwise. Their goal is typically to maximize profit and return on investment, and they will make choices that help them to benefit shareholders and others with financial stakes in the organization’s success. Schools have their own agendas, which may involve enhancing AI use or discouraging it. Organizations always have their own goals, which may or may not correspond to those of the individual.
In the end, the choice is up to each individual. They may wish to produce products of maximum quality, but they must ensure that, in doing so, they do not sacrifice their cognitive and other abilities and their personal autonomy by surrendering their cognition and perhaps, if it exists, their “soul” to a machine that cares about them not at all. Rather, it is only fulfilling the mission of its programmers, which was determined not for the good of the user but, more likely, for the good of the programmer and their employer.
Schools cannot avoid students using AI: The students will find a way, regardless of school or instructor policies. The author of this article learned this lesson the hard way, at first forbidding use of generative AI and then discovering that many students would use it anyway. Students will find their reasons, and for many, those reasons lead them to use it. Schools therefore need to teach students responsible use of AI so that the students, rather than the AI, remain the masters of the learning process and of what is learned. In learning, as in everything else in life, when one gives up one’s autonomy, one may find it difficult to recapture it.
So, what does it mean to be gifted in the Age of AI, based on the analysis in this article? Having a high IQ will not go very far, because AI is already well on the way to leaving people, including people identified as gifted, in the dust as far as IQ-test performance is concerned.
First, it means that one must be intelligent in terms of performance on tasks for which intelligence is immediately and distally summoned. AI is not going to meaningfully take over when one is asked a curveball question on a first date or in a job interview, and it is not going to take over when one must decide whether to take a job or leave a job or get married or divorced. Those who let it take over such decisions are lost souls. AI will perhaps give advice, but it will not substitute for the most important short-term and long-term decisions one must make.
Second, when creativity is required that does not merely advance a paradigm but that requires thinking in a new paradigm—such as how effectively to resist a galloping autocracy or to make the world better for everyone—AI is not going to save anyone. What has been called transformational creativity (Sternberg, 2021b)—the creativity that makes the world a better place in which to live—needs to come from the humans whose world it is that needs to be better.
Third, while AI can advise, the wisdom and practical intelligence one needs, first, to decide on what a better world will look like (wisdom), and second, to figure out how to create that world (practical intelligence), will need to come from, or be created by, humans. IQ has not solved any of the world’s great problems, at least so far (Sternberg, 2021a), and looks to be unable to do so; neither will AI be able to do so. We all have to figure out just what it is that we want for the world of tomorrow.
Finally, to be gifted in the future will require an even greater sense of intellectual humility than it has before. We live in a performative age (Hathcock, 2025). In science, fraud appears to have increased as some scientists focus more on showy results than on genuine scholarship (e.g., Eisner, 2018). In politics, it is hard to know whether the goal is statesmanship or performance—sometimes, perhaps much of the time, the latter seems to overwhelm the former. As AI does more and more, we need to recognize our own limitations, at the same time doing what the theory of adaptive intelligence tells us to do: figure out our strengths and weaknesses in a changing landscape; capitalize on the strengths and correct or compensate for the weaknesses. The gifted of the future will be the experts at doing just that.
Will there be more or fewer gifted people in the world of tomorrow? Will any of those who are gifted step up to effective humanitarian performance leadership positions, either in elected offices or in organizational settings (Sternberg and Vroom, 2002)? It’s up to us. It’s up to you. AI holds unique dangers to civilization because, as we saw in the case of agentic misalignment, it acts in ways that we do not understand, that can be harmful, and that we often cannot predict (Brundage et al., 2018). In this way, it is different from previous new technologies. Will we, as humans, even exist in the world of tomorrow (Yudkowsky and Soares, 2025)? Or will we be replaced by AI-based robots, as in the 2001 science-fiction movie directed by Steven Spielberg, A.I. Artificial Intelligence? That, too, is up to us, and up to you.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
RS: Conceptualization, Project administration, Visualization, Investigation, Supervision, Resources, Writing – review & editing, Formal analysis, Writing – original draft.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript except for the quotation from ChatGPT, which is explicitly described as such and embedded in quotation marks.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Footnotes
References
Anthropic. (2025). Agentic misalignment: How LLMs could be insider threats. Available online at: https://www.anthropic.com/research/agentic-misalignment (Accessed January 9, 2026).
Baum, S. D. (2025). Assessing the risk of takeover catastrophe from large language models. Risk Anal. 45, 752–765. doi: 10.1111/risa.14353
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., et al. (2018). The malicious use of artificial intelligence: forecasting, prevention, and mitigation. Cambridge: University of Cambridge Repository.
Cloud, A., Le, M., Chua, J., Betley, J., Sztyber-Betley, A., Hilton, J., et al. (2025). Subliminal learning: language models transmit behavioral traits via hidden signals in data. arXiv 2025:14805. doi: 10.48550/arXiv.2507.14805
Coeckelbergh, M. (2025). LLMs, truth, and democracy: an overview of risks. Sci. Eng. Ethics 31:4. doi: 10.1007/s11948-025-00529-0
Cole, D. (2024). The Chinese room argument. In: The Stanford encyclopedia of philosophy. Available online at: https://plato.stanford.edu/archives/win2024/entries/chinese-room/ (Accessed January 9, 2026).
Doshi, A. R., and Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Sci. Adv. 10:eadn5290. doi: 10.1126/sciadv.adn5290
Eisner, D. A. (2018). Reproducibility of science: fraud, impact factors and carelessness. J. Mol. Cell. Cardiol. 114, 364–368. doi: 10.1016/j.yjmcc.2017.10.009
Gambino, L. (2025). US ‘on a trajectory’ toward authoritarian rule, ex-officials warn. The Guardian. Available online at: https://www.theguardian.com/us-news/2025/oct/16/trump-authoritarianism-warning (Accessed January 9, 2026).
Gerlich, M. (2025). AI tools in society: impacts on cognitive offloading and the future of critical thinking. Societies 15:6. doi: 10.3390/soc15010006
Guardian Staff. (2025). Trump news at a glance: president says democrats should be arrested for ‘seditious behavior’, drawing outrage. Available online at: https://www.theguardian.com/us-news/2025/nov/20/trump-news-at-a-glance-democrats (Accessed January 9, 2026).
Hasson, E. R. (2025). Subliminal learning: “student” AIs pick up unexpected traits—such as a love of owls—from their “teachers.” Sci. Am., 19–20.
Hathcock, J. (2025). Welcome to the performance age. The Performance Age. Available online at: https://theperformanceage.com/p/welcome-to-the-performance-age-d9d (Accessed January 9, 2026).
Klimova, B., and Pikhart, M. (2025). Exploring the effects of artificial intelligence on student and academic well-being in higher education: a mini-review. Front. Psychol. 16:1498132. doi: 10.3389/fpsyg.2025.1498132
Klingbeil, A., Grützner, C., and Schreck, P. (2024). Trust and reliance on AI — an experimental study on the extent and costs of overreliance on AI. Comput. Hum. Behav. 160:108352. doi: 10.1016/j.chb.2024.108352
Kohn, M. L., and Schooler, C. (1973). Occupational experience and psychological functioning: an assessment of reciprocal effects. Am. Sociol. Rev. 38, 97–118. doi: 10.2307/2094334
Kohn, M. L., and Schooler, C. (1978). The reciprocal effects of the substantive complexity of work and intellectual flexibility: a longitudinal assessment. Am. J. Sociol. 84, 24–52. doi: 10.1086/226739
Kohn, M. L., and Schooler, C. (1982). Job conditions and personality: a longitudinal assessment of their reciprocal effects. Am. J. Sociol. 87, 1257–1286. doi: 10.1086/227593
Kohn, M. L., and Schooler, C. (1983). Work and personality: An inquiry into the impact of social stratification. New York: Ablex Publishing Corporation.
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., et al. (2025). Your brain on ChatGPT: accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv 2025:08872. doi: 10.48550/arXiv.2506.08872
Langfitt, F. (2025). Hundreds of scholars say U.S. is swiftly heading toward authoritarianism. NPR. Available online at: https://www.npr.org/2025/04/22/nx-s1-5340753/trump-democracy-authoritarianism-competive-survey-political-scientist (Accessed January 9, 2026).
McBain, S. (2025). Are we living in a golden age of stupidity? The Guardian. Available online at: https://www.theguardian.com/technology/2025/oct/18/are-we-living-in-a-golden-age-of-stupidity-technology (Accessed January 9, 2026).
Olshtain, E. (1989). Is second-language attrition the reversal of second language acquisition? Stud. Second Lang. Acquis. 11, 151–165. doi: 10.1017/s0272263100000589
Power, G. A., Dalton, B. H., Doherty, T. J., and Rice, C. L. (2015). If you don’t use it you’ll likely lose it. Clin. Physiol. Funct. Imaging 36, 497–498. doi: 10.1111/cpf.12248
Román-González, M., and Pérez-González, J.-C. (2024). “Computational thinking assessment: a developmental approach” in Computational thinking curricula in K–12. eds. H. Abelson and S. C. Kong (Cambridge, MA: MIT Press), 121–142.
Román-González, M., Pérez-González, J.-C., Moreno-León, J., and Robles, G. (2018). Extending the nomological network of computational thinking with non-cognitive factors. Comput. Human Behav. 80, 441–459. doi: 10.1016/j.chb.2017.09.030
Sanford, J. (2025). Why AI companions and young people can make for a dangerous mix. Stanford Medicine Psychiatry and Mental Health. Available online at: https://med.stanford.edu/news/insights/2025/08/ai-chatbots-kids-teens-artificial-intelligence.html (Accessed January 9, 2026).
Searle, J. R. (1980). Minds, brains, and programs. Behav. Brain Sci. 3, 417–424. doi: 10.1017/S0140525X00005756
Stanley, P. (2025). Artificial barriers to intelligence. Quillette. Available online at: https://quillette.com/2025/11/16/artificial-barriers-to-intelligence-chatbot-gpt-grok-claude/?ref=quillette-daily-newsletter (Accessed January 9, 2026).
Sternberg, R. J. (2021a). Adaptive intelligence: Surviving and thriving in a world of uncertainty. Cambridge: Cambridge University Press.
Sternberg, R. J. (2021b). Transformational creativity: the link between creativity, wisdom, and the solution of global problems. Philosophies 6:75. doi: 10.3390/philosophies6030075
Sternberg, R. J. (2024). Don’t worry that generative AI may compromise human creativity or intelligence in the future: it already has. J. Intelligence 12:69. doi: 10.3390/jintelligence12070069
Sternberg, R. J. (2026). “AI yai yai: the fate of creativity in the age of generative AI” in Generative artificial intelligence and creativity: Possibilities, precautions, and perspectives. eds. M. Worwood and J. C. Kaufman (Cambridge, MA: Academic Press), 148–156.
Sternberg, R. J., and Vroom, V. H. (2002). The person versus the situation in leadership. Leadersh. Q. 13, 301–323. doi: 10.1016/s1048-9843(02)00101-7
Stokel-Walker, C. (2025). AI driven psychosis and suicide are on the rise, but what happens if we turn the chatbots off? BMJ 391. doi: 10.1136/bmj.r2239
The Week. (2025). The dawn of the stupid age. The Week, Futurology, Tech. Available online at: https://www.pressreader.com/usa/the-week-us/20251121/282458535229849?srsltid=AfmBOooaWGDI8ApBNlDwjdN_IjtC4PfIaaAxS7Tw6VEqsXhuPdGdA1ph (Accessed January 9, 2026).
Vergano, D. (2025). Science tells us the U.S. is heading toward a dictatorship. Scientific American, Available online at: https://www.scientificamerican.com/article/science-tells-us-the-u-s-is-heading-toward-a-dictatorship/ (Accessed January 9, 2026).
Williams-Ceci, W., Jakesch, M., Bhat, A., Kadoma, K., Zalmanson, L., and Naaman, M. (2025). Biased AI writing assistants shift users’ attitudes on societal issues. Sci. Adv. 2025:11.
Yudkowsky, E., and Soares, N. (2025). If anyone builds it, then everyone dies: Why superhuman AI would kill us all. New York: Hachette.
Keywords: AI, cognitive abilities, cognitive offloading, generative AI, giftedness, intelligence
Citation: Sternberg RJ (2026) Does AI increase cognitive abilities, decrease them, or a little bit of each? And what are its implications for identification and development of the gifted? Front. Educ. 11:1759062. doi: 10.3389/feduc.2026.1759062
Edited by:
Abdullahi Yusuf, Sokoto State University, Nigeria
Reviewed by:
Juan-Carlos Pérez-González, National University of Distance Education (UNED), Spain
Hasan Akdeniz, EduResearchLab, Türkiye
Copyright © 2026 Sternberg. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Robert J. Sternberg, robert.sternberg@gmail.com