OPINION article
Front. Soc. Psychol.
Sec. Attitudes, Social Justice and Political Psychology
This article is part of the Research Topic "The Rise of Extremism in Democratic Societies"
Social media, AI, and the rise of extremism during intergroup conflict
Provisionally accepted
University of Cambridge, Cambridge, United Kingdom
The internet is often considered an amplifier of extremism (e.g. Binder & Kenyon, 2022; Mølmen & Ravndal, 2023). While social media offers unique opportunities for cross-cultural exchange (Yuna et al., 2022), it is increasingly associated with echo chambers and polarization (Cinelli et al., 2021). Skeptics may counter that extremism predates the digital age: after all, the rise of Nazism unfolded without algorithms, and the degree to which social media plays a causal role in extremism is debated (e.g. Shaw, 2023). However, this skepticism should not overlook the distinctive amplification power of today's social media algorithms, which repeatedly promote divisive content (Rathje et al., 2021; Milli et al., 2025). In this article, we argue that exposure to such content can drive radicalisation, especially among youth (Nienierza et al., 2019), either by introducing psychologically vulnerable individuals to extremist propaganda or by strengthening links between existing radical beliefs and political violence (Pauwels & Hardyns, 2018). We illustrate this through the cases of ISIS's use of social media (Awan, 2017) and Russian influence operations (Cosentino, 2020). These examples were selected because (a) they represent high-profile cases of how social media is used to breed extremism and (b) they illustrate how both state and non-state actors exploit the digital sphere for extremist agendas in Western democracies in distinct, yet related, ways. Further, we argue that emerging AI technologies exacerbate these threats in potentially unprecedented ways. Finally, we consider the potential of inoculation (McGuire, 1964) as an intervention against online extremism (Saleh et al., 2023).

In an analysis of some 6,000 individuals across Arab countries, Piazza and Guler (2021) found that individuals using the internet for political news were more likely to support ISIS, though the direction of causality remains unclear: individuals already sympathetic to ISIS may engage in confirmation bias (Klayman, 1995; Modgil et al., 2024), especially in the restricted media contexts of Arab countries, where social media has long been utilised by dissidents (Wolfsfeld et al., 2013). These pathways are not mutually exclusive, either: social media has the capacity both to seed extremist beliefs and to reinforce existing ones (see Figure 1), both of which may increase support for political violence (Hassan et al., 2018; Pauwels & Hardyns, 2018).

Figure 1. A dual-pathway model of social media-based radicalisation. Note: We recognize that causality can be bi-directional in mutually reinforcing loops, e.g., when newfound support for extremism is reinforced by later (algorithmic) exposure to extremist content on social media. Similarly, existing support for political violence can lead people to seek out extremist content, which further solidifies violent intent.

The apparent success of ISIS is perhaps unsurprising given their well-established propaganda strategies (Lieberman, 2017). They were early adopters of YouTube, and their online presence spanned deep-web magazines, violent high-definition videos, and exploitation of platforms such as Twitter (Colas, 2017; Lieberman, 2017; Venkatesh et al., 2020). In 2014, they launched an app that automatically shared pro-ISIS tweets with users, prompting Iraq's government to block Twitter (Irshaid, 2014).
ISIS often uses social networks to recruit by appealing to belonging, purpose, and identity (Ponder & Matusitz, 2017) and by romanticizing life as an ISIS fighter (Awan, 2017).

Why was social media especially effective? One contributing factor is that it enables unprecedented mass distribution of content (Aïmeur et al., 2023). For example, Alfifi et al. (2019) compiled a dataset of 17 million pro-ISIS tweets, with over 71 million retweets. It is difficult to imagine how a group in Syria and Iraq could reach such vast audiences before the digital era. Thus, in line with our dual-pathway model (see Figure 1), this reach may initiate support for extremist groups in some (i.e. those discovering ISIS propaganda online) and reinforce existing sympathies in others (through greater exposure to pro-ISIS content).

Repeated pro-extremist content also exploits the illusory truth effect, whereby repeated claims seem more accurate even if false (Fazio et al., 2015; Udry & Barber, 2024), and the high visibility of extremists can trigger a 'false consensus effect', leading individuals to overestimate public support for extreme views (Wojcieszak, 2011; Luzsa & Mayr, 2021). Such tactics are desirable for terrorist organisations that are, in reality, deeply unpopular (Poushter, 2015), as social cues enhance the perceived credibility of misleading narratives (Traberg et al., 2024).

Another aspect to consider is algorithms' negativity bias: Milli et al. (2025) showed that Twitter's algorithm amplifies divisive content far more than users' stated preferences would suggest (see also Rathje et al., 2024). This suits a group such as ISIS, whose propaganda was deliberately designed to shock (Venkatesh et al., 2020). As well as attracting attention, such content potentially fosters desensitisation to violence (Bushman et al., 2009; Krahé et al., 2011).

Of course, social media alone cannot explain radicalisation. Individual factors such as intolerance of uncertainty, perceived injustice, isolation, and a quest for significance likely play key roles (Knapton, 2014; Jasko et al., 2017; Trip et al., 2019). But social media gives extremist organisations unique opportunities to appeal to such individuals.

In short, ISIS's digital strategy supports the idea that social media can play a determining role in both inculcating and reinforcing extremist positions. In the next section we argue that state actors, too, have weaponised these platforms in distinct yet similar ways.

Although adversaries in international politics have always engaged in covert subversion campaigns against each other (e.g. O'Brien, 1995), social media has opened an entirely new arena for such activity. In discussing ISIS's mass proliferation of content, observers may note some parallels with Russia's 'firehose of falsehood' strategy (Paul & Matthews, 2016), most recently deployed in Ukraine (Karalis, 2024; Roozenbeek, 2024). This strategy rapidly spreads misinformation across channels to weaken trust in reliable sources. An example is the Doppelgänger campaign, in which Russian operatives cloned Western news sites to spread misinformation about Ukraine (Alaphilippe et al., 2022). The effectiveness of such tactics is debated (Bail et al., 2020; Eady et al., 2023), as crafting effective propaganda is harder for Russia in the West than at home, where it controls the information sphere (Kaye, 2022). Nevertheless, high-volume output from the Russian Internet Research Agency (IRA) predicted polling figures for Trump (Ruck et al., 2019). This is partially explainable by psychological research.
Falsehoods often have more reach than accurate information online, partially because they tend to be more novel, polarizing, and emotionally engaging (Vosoughi et al., 2018; McLoughlin et al., 2024; Kauk et al., 2025). Individuals may then continue believing misinformation even after correction, a phenomenon known as the continued influence effect (Johnson & Seifert, 1994; Lewandowsky et al., 2012). Moreover, Russian propaganda tends to create the impression that it comes from multiple sources (Paul & Matthews, 2016), because arguments appear more convincing when repeated by multiple sources (Harkins & Petty, 1987). Russia exploits this principle through coordinated state media, bots, and fake accounts, uniquely enabled by social media (Geissler et al., 2023). By flooding the digital environment with misinformation, Russia also exploits cognitive biases that entrench it, including black-and-white thinking (EUvsDisinfo, 2017), a tendency long linked to extremism (e.g. Roberts-Ingleson & McCann, 2023; Enders et al., 2024).

A key difference between a terrorist organisation (ISIS) and a state actor (Russia), however, is strategy. Whereas ISIS produces self-promotional propaganda, Russian operations often covertly exploit internal divisions within adversarial societies by spreading misinformation (Karlsen, 2019). Lacking legitimacy, terrorist groups may favour attention-grabbing tactics to win support (e.g. through shock; Venkatesh et al., 2020), while state actors can afford more subtle strategies. That both strategies flourish underscores social media's ability to enable extremist manipulation across diverse actors. Some individuals may be exposed to Russian misinformation they might otherwise never encounter (i.e. Pathway A), given its prevalence online (Muhamed & Mathew, 2022), while others may strengthen existing radical beliefs through confirmation bias towards already internalised misinformation (i.e. Pathway B; Modgil et al., 2024).

A clear example of Russia's attempt to stir division came during the 2016 U.S. election, when Russia's Internet Research Agency ran thousands of fake American accounts. These accounts amplified racial, anti-immigration, and conspiratorial narratives, polarising both left- and right-leaning audiences (Howard et al., 2018; Simchon et al., 2022; Vićić & Gartzke, 2024). Russian operators also organised U.S. protests on race and vaccination (Aceves, 2019; Broniatowski et al., 2018), exploiting social media anonymity to pose as in-group members, a clever tactic given that in-group messages are deemed more persuasive and trustworthy (Mackie et al., 1992; Traberg & van der Linden, 2021; Im et al., 2020). Using fake accounts, they spread misinformation more effectively and fuelled polarisation, which can heighten extremism (Mølmen & Ravndal, 2023). From a social identity theory (Tajfel & Turner, 1979) perspective, heightened polarisation sharpens the psychological boundaries between in-groups and out-groups and can increase the likelihood of violence against out-groups (Doosje et al., 2016). That Russia, known for its ties with far-right groups (Pantucci, 2023), fuels these dynamics further illustrates how social media emboldens extremism.
As extremist groups and state actors weaponise social media, the rise of AI threatens to amplify these risks at a scale that was previously unachievable. For example, in addition to automated algorithms promoting divisive and extremist content (Milli et al., 2025; see also Burton, 2023), Baele et al. (2024) found that LLM-generated texts mimicking extremist groups appeared so credible that they even fooled academic experts. Extremists may also exploit chatbots. By simulating human-like conversation, AI chatbots can foster a sense of direct personal connection with users (e.g. Zimmerman et al., 2024). Since recruitment relies on trust (Saleh et al., 2021, 2023), AI chatbots could act as scalable recruiters, tailoring narratives to users' vulnerabilities (Houser & Dong, 2025; Farber, 2025). By mimicking in-group cues (Baele et al., 2024) and appealing to identity biases (Hu et al., 2025), they could 'befriend' users and exploit principles of persuasion (Cialdini, 2008), potentially more effectively than social media due to their personal, conversational nature. It was the threat of extremist chatbots that the UK Government's review of terrorism legislation raised in 2024 (Vallance & Rahman-Jones, 2024).

Lastly, LLMs can now create persuasive propaganda (Wack et al., 2025a), often as persuasive as, or more persuasive than, human-written propaganda (Goldstein et al., 2024), which can then be micro-targeted at users. A recent experiment estimates that roughly 2,500 to 11,000 individuals can be persuaded for every 100,000 targeted (Simchon et al., 2024), which is meaningful given that elections are often decided by small margins, and these methods are already being leveraged for propaganda campaigns (Wack et al., 2025b). These capabilities could strengthen both of our proposed pathways of online extremism (Figure 1). They could inculcate extremist beliefs in new audiences through tailored exposure, potentially micro-targeting individuals with traits linked to radicalisation (Simchon et al., 2024), and reinforce them in existing radicals through personalised persuasion that validates existing beliefs (e.g., Du, 2025).

Although there is considerable cause for concern regarding the ability of extremist organisations to exploit social media and emerging AI technologies to amplify the spread of harmful narratives, some respite may be found in the concurrent development of psychological interventions against such risks. One promising approach is rooted in inoculation theory (McGuire, 1964; van der Linden, 2023, 2024). This 'pre-bunking' approach (Lewandowsky & van der Linden, 2021) typically forewarns individuals of potential manipulation and offers a 'refutational preemption', i.e. exposure to a weakened version of an extremist claim alongside a clear refutation, exposing the extremist playbook (van der Linden, 2023; Roozenbeek et al., 2022). Akin to a psychological vaccine, this process builds cognitive resistance, making individuals less susceptible to similar misinformation in the future (van der Linden, 2023, 2024). For example, Saleh et al. (2021, 2023) tested an inoculation game in former ISIS-held regions of Iraq, where participants role-played recruiters. Players exposed to simulated extremist recruitment tactics online later showed greater recognition of and resistance to manipulation.
Similarly, Lewandowsky and Yesilada (2021) found that inoculation videos reduced belief in and sharing of both Islamist and anti-Islam disinformation, while Braddock (2019) found that inoculation reduced the credibility of left- and right-wing extremist groups and lowered intentions to support them. This underscores inoculation's potential, which has recently been evaluated at scale on YouTube (Roozenbeek et al., 2022).

Inoculation may also protect against influence operations. Ziemer et al. (2024) tested an inoculation intervention against Russian war-related misinformation online among ethnic Russians in Germany, finding that it enhanced participants' ability to detect misinformation. The ability of inoculation to work against such social identity-salient persuasion attempts remains, however, relatively understudied. Moreover, inoculation faces some challenges as a counter-extremism tool. Designed as a pre-emptive intervention (McGuire, 1964), it may be less effective once individuals are already radicalised, i.e. Pathway B in our model (though 'therapeutic inoculation' may help address internalised extremist narratives; Compton, 2020; van der Linden et al., 2017). Reaching vulnerable groups also remains challenging. While successful in former ISIS-held areas (Saleh et al., 2021, 2023), such efforts are harder in regions where extremists remain in charge. Moreover, while Ziemer et al. (2024) found that inoculation reduced belief in Russian misinformation, it did not alter participants' attitudes toward the war, suggesting that identity-salient views may be more resistant (see also Van Bavel & Pereira, 2018). Nonetheless, evidence that inoculation counters extremist narratives warrants further research on its potential to reduce group radicalisation (Bierwiaczonek et al., 2025).

Overall, while extremist ideologies are not new, we illustrate how digital platforms have transformed the landscape of extremism: non-state and state actors with distinct aims exploit algorithms to spread their narratives at unprecedented speed and scale. This may both initiate radicalisation and deepen existing extremism, in mutually reinforcing pathways to support for extremist violence (Figure 1). These dynamics may be magnified by AI technologies. Yet psychological research also highlights inoculation theory as a promising intervention to build resilience against extremist manipulation (Saleh et al., 2021, 2023; Lewandowsky & Yesilada, 2021), though it needs further testing in identity-salient contexts. Going forward, psychologists, policymakers, and technology companies must work together to anticipate and mitigate the evolving threat landscape. This will require further research into the psychology of extremism in the digital age and greater investment in evidence-based interventions. For example, psychological insights could be integrated into counter-extremism strategies such as algorithmic regulation (Whittaker et al., 2021) and the UK's PREVENT programme (Montasari, 2024). Adopting inoculation as a counter-extremism strategy could help make PREVENT more preventative, as it currently still relies on individual referrals. Likewise, education curricula could build early digital and AI literacy against extremism, e.g. by incorporating interactive games into classrooms, as is already done in Finland (Kivinen, 2023). Doing so could bolster the ability of democracies to withstand the challenges of increasingly digitalized forms of extremism.
Keywords: extremism, intergroup conflict, social media, inoculation, AI
Received: 23 Sep 2025; Accepted: 30 Oct 2025.
Copyright: © 2025 Lavie-Driver and van der Linden. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Sander van der Linden, sander.vanderlinden@psychol.cam.ac.uk
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.