Sec. Language Sciences
Volume 6 - 2021 | https://doi.org/10.3389/fcomm.2021.661801
Countering the Cognitive, Linguistic, and Psychological Underpinnings Behind Susceptibility to Fake News: A Review of Current Literature With Special Focus on the Role of Age and Digital Literacy
- 1Center for Education, University Medical Center Utrecht, Utrecht, Netherlands
- 2Journal of Trial and Error, Utrecht, Netherlands
- 3Artificial Intelligence and Cognitive Neuroscience, Utrecht University, Utrecht, Netherlands
- 4ICLON, Leiden University, Leiden, Netherlands
- 5Rhetoric Department, Arts and Humanities, University College Roosevelt, Utrecht University, Utrecht, Netherlands
Fake news poses one of the greatest threats to democracy, journalism, and freedom of expression. In recent cases, fake news is designed to create confusion and lower trust among the general public—as seen in the 2016 United States presidential campaign and the Brexit referendum. The spread of information without formal verification has increased since the introduction of social media and online news channels. After the popularization of fake news, researchers have tried to evaluate and understand the effects of false information from multiple perspectives. However, it is evident that to tackle the problem of fake news, interdisciplinary collaboration is needed. This article evaluates the main findings of recent literature from an integrated psychological, linguistic, cognitive, and societal perspective, with a particular focus on digital and age-related aspects of fake news. From a psychosociological standpoint, the article provides a synthesized profile of the fake news believer. This profile generally denotes overconfidence in one’s ability to assess falsehoods due to a human need for causal explanations. The fake news believer can be described as well-intentioned and critical, yet driven by a basis of distrust and false foundational knowledge. Within linguistics, manual analytical tools exist to understand the persuasive tactics in fake news. The article takes analytical techniques from both the humanities and the social sciences, such as transitivity analysis, Hugh Rank’s language persuasive framework, and others that can be used to analyze the language used in the news. However, in the age of big data perhaps only computational techniques can adequately address the issue at the root. While these computational approaches show promise, hurdles remain, such as the ambiguity of satire and sarcasm, the manual labeling of data, and the ever-changing nature of language. Findings on reading comprehension differences between digital and paper reading remain inconclusive.
There are, however, notable behavioral and cognitive differences in reading behavior for the digital medium, such as more scanning, less sustained attention, cognitive retreat, and shallower processing. Interestingly, when metacognitive strategies were probed by, for example, having participants independently allocate reading time, a difference in comprehension scores started to emerge. Researchers have also found accounts of differences due to medium preference, and on average older people seem to prefer paper reading. Cognitive retreat, shallow processing, and overconfidence associated with digital reading and the digital medium in general might make readers less likely to engage in the cognitive effort fake news detection requires. Considering that there are clear cognitive differences between older and younger generations (in terms of decreased processing speed, metacognition, and ability to multitask), differences in how these generations process fake news are plausible. Regrettably, most current research into psychological factors influencing susceptibility to fake news does not take age differences into account. Our meta-analysis showed that 74% of behavioral studies looking at fake news largely ignore age (N = 62), even though voter turnout was far higher among older generations for both the 2016 United States presidential election and the 2016 United Kingdom European Union membership referendum. Many provisional programs set up in the past few years have aimed at training digital literacy, reading comprehension, and asking critical questions as vital skills to detect fake news. These training programs are, however, mostly aimed at younger – digitally native – groups. As a result, these efforts might not be as efficacious as intended and could be improved upon significantly.
This article argues that age must become a larger focus in fake news research and that efforts to educate people against fake news must expand beyond universities and other isolated settings to include older generations.
Although today’s era is referred to as the Information Age, it appears that many grow more suspicious of the “information overload” we receive. The 2010s, as many academic and non-academic book and article titles seem to suggest, mark the cusp of the “post-truth” era, where truth and “alternative facts” become modeled after one’s own digital information feed. “Post-truth” was even declared word of the year by Oxford Dictionaries in 2016 (BBC News, 2016a). In countries such as the United States and the United Kingdom, trust in (corporate) media institutions has decreased, while selective exposure and echo chambers have increased (Edelman Trust Barometer, 2021, passim; Spohr, 2017, 157).
Most notably, the international news cycle of the 2010s repeatedly covered an issue now called “fake news”. Although this article will discuss it in more depth later, fake news denotes news stories in which the facts are imbued with heavy bias or even consciously distorted to create disinformation for a certain political agenda. The controversy surrounding the term reached a boiling point around the 2016 US presidential elections (as well as during the Brexit referendum in the UK), when Republican nominee Donald Trump used it during the presidential debates to discredit claims by his Democratic opponent Hillary Clinton (BBC News, 2016b; Grinberg et al., 2019, passim). Despite having existed longer, the term became mainstream during this time (a similar development could be seen in the renewed usage of “gaslighting” and neologisms such as “alternative facts”, “astroturfing”, and the “post-truth era”).
Worldwide, various educational institutions launched training programs and other initiatives to counter or prevent the spreading of fake news. These programs mostly base their efforts on findings in media and journalism studies. It is crucial to ask how such endeavors can be improved when a more interdisciplinary approach to the problem of fake news is taken.
This article reviews recent publications from various scientific disciplines to aid future ventures in mitigating the effects fake news has on society. More concretely, the article reviews recent literature from a cognitive perspective. It tackles the fake news conundrum from psychological, neurological, linguistic, cognitive, and metacognitive angles. Lastly, it reports the findings within these fields while also relating fake news to reading comprehension, literacy, and critical thinking skills.
First, the article synthesizes a sketch of the psychological profile of so-called fake news believers. Secondly, it discusses deeper cognition-based issues surrounding fake news in increasing complexity. Fake news spreads through spoken word and written text, so it is paramount to understand fake news at the level of both language usage and reading comprehension. The latter has several complicating factors, as reading comprehension is, today, frequently tied to digital reading practices. Digital reading, as will be discussed, impacts how the human brain processes information differently. Thirdly, the article goes into cognitive processes such as comprehension, critical thinking, information processing, and metacognition. Finally, the article examines age as a factor, either due to cognitive decline or cohort effects (cf. “digital literacy”). The factor of age is, as argued, an important concern because of various correlated aspects of age that could matter for the “believability” of fake news. The article does so through a meta-analysis of recent studies on whether or not they take age into account. Following this, it gives suggestions for educational diversification based on age groups. The article concludes by outlining how initiatives to counter the underlying factors that make fake news effective can be improved by drawing from the disciplines discussed here.
Defining “Fake News” and Its Psychological Underpinnings
The following section compares several articles to foster a functional definition of fake news. It ends with an investigation of the psychosociological functioning of fake news, highlighting factors that are crucial background for the sections that follow.
Defining Fake News
Before the article can discuss the psychological success of fake news, it is relevant to reflect on the term itself. According to Tandoc et al. (2017), fake news constitutes “fictitious accounts made to look like news reports” found online. Other researchers add that its fictions are “intentional”, with the purpose to mislead for ideological benefit (Tandoc et al., 2017, 138). Of course, as Gelfert (2018) highlights, even with “mainstream news”, the reader is required to distinguish fact from opinion. The reader reads political biases in the text and is epistemologically prodded to question the reporter’s authority, the validity of their framing, and their sources (Allcott and Gentzkow, 2017, 214; Gelfert, 2018, 87–90; Tandoc et al., 2017, 141). In this light, fake news has been around as long as news itself, and is not confined to non-authoritative reporters online (Bakir and McStay, 2017, 4). Rather than fake news being accepted passively, its authors often forge it in a way that is critical of “mainstream” news and invites the reader to question conventional narratives. However, when discussing fake news, scholars typically refer to internet-based news–even when such online stories appear in print or another offline medium (Gelfert, 2018, 96–98).
Evaluating Tandoc et al.’s and Gelfert’s delineations, fake news is the intentional creation, repetition, or presentation of deceptive disinformation which masquerades as the truth. Its effect is to instill false beliefs with intentional misinformation behind them (Gelfert, 2018, 103–108; Tandoc et al., 2017, 140, 147); the intention to be taken seriously, and to be instrumentalized socio-politically, is what sets it apart from satire and hoaxes (Tandoc et al., 2017, 147). Furthermore, fake news lends itself credibility by framing invented or distorted news stories as tangentially related to true events and real people. Fake news links itself to real events in a politically contested (Spohr, 2017, 150–157) climate so that the reader draws those events into question. Therefore, fake news finds support by relying on the general distrust of the public and on the overly critical individuals in the minority who repeat disinformation. The spreading itself is called misinformation (Bakir and McStay, 2017, 4). Researchers often typify the repetitive nature and community formation of fake news in terms of echo chamber, tribe-formation, or social bubble effect(s). These effects are vital for fake news’ continued proliferation, growth, longevity, and potency, relying on confirmation bias within such an “echo chamber” (Bakir and McStay, 2017, 7; Gelfert, 2018, 112–113; Spohr, 2017, 150–157). These communities are facilitated by online platforms (mostly social media websites) where these ideas fester, transform, and spread in digital social interaction (Allcott and Gentzkow, 2017, 221; Spohr, 2017, 152–153; Tandoc et al., 2017, 148–149; Torres et al., 2018, 3,983–3,984). In brief, fake news is the intentional forgery of stories related to real events and people to spread false beliefs for political purposes; it spreads mostly online through social media, where it forms communities that reinforce those false beliefs.
While this is how we defined fake news, the literature analyzed in this article might use slightly deviating definitions. Both in scientific literature and in popular usage, the term fake news has been used to describe the following, incompatible, occurrences: outright fabrication, recklessly unreliable reporting, slanting of facts, and honestly mistaken reporting.
The extent to which fake news influences an individual varies greatly. In its simplest form, fake news distorts tiny details of a news story to bias an individual toward a topic that is socio-politically salient to the author. However, the social implications highlighted above can be a driving factor in distorting much larger aspects of how someone perceives current events. In its most sophisticated form, fake news has enabled conspiracy theories to travel from the margin to the mainstream by questioning the trustworthiness of traditional journalism (Allcott and Gentzkow, 2017, 212–213; Bakir and McStay, 2017, 5). Communities form around false narratives and dispute certain kinds of otherwise conventionally accepted knowledge. This phenomenon is emboldened by social media as fake news spreads through social networks (Dizikes, 2018, passim; Monot and Zappe, 2020, passim; Lohr, 2018).
As this article discusses, recent efforts to thwart fake news beliefs have mostly focused on fact-checking and on developing programs for young(er) people. Yet, if the aim is to prevent disinformation on all levels—political, economic, educational, medical, and entertainment—it is paramount to expand the scope of such programs. A larger, interdisciplinary scope is necessary because conspiratorial thinking can feed into many psychosocial human needs and abilities (Spohr, 2017, 151; Tandoc et al., 2017, 137–138) and is compounded by age.
A Psychological Sketch of the Fake News Believer
Many recent studies discuss the psychological makeup of the believer in fake news. A lack of analytical thinking (and reading) skills, general skepticism, and a lack of reflexive open-mindedness are vital to the psychological makeup of such a person (Pennycook and Rand, 2019a, 30). As for reading, it appears that people who believe “pseudo-profound sentences” (such as motivational phrases) are also more likely to believe fake news is accurate. Moreover, they are more likely to share such stories. This appears to be related to overconfidence in one’s own knowledge (Pennycook and Rand, 2019a, 33–35). Yet, people can detect fake news beyond their ability or “willingness to think analytically” (Pennycook and Rand, 2019a, 32). The factors listed above indicate that a wide range of people can fall victim to believing fake news despite different but related cognitive abilities–perhaps owing to similar ideological beliefs.
As mentioned above, the reader and believer of fake news is not a passive receiver, but often a complicit agent in the spread of disinformation. The problem is not that believers of fake news are uncritical, but that they are frequently critical to the point of losing sight of conventional truths based on scientific findings. Examples would be the 9/11 Truther myth, the Antivax, Anti-GMO, Fake Moon Landing, Flat Earth, and Climate Change Denial movements, and Russiagate (and other conspiracy theories) (Gelfert, 2018, 98, 105; Tandoc et al., 2017, 139). The connection between susceptibility to fake news and general intelligence remains unclear. For some issues, those who believe particular fake news are more likely to have above-average educational levels, which underlies their “self-investigative” and “citizen journalism” attitudes in the first place. Alternatively, previous research also suggests that lower cognitive abilities (reasoning, remembering, understanding, ideational flexibility, and problem-solving) are significant factors that can foster belief in fake news (De Keersmaecker and Roets, 2017, 107–109). However, regardless of cognitive ability, believers in fake news are subject to the same “magnitude of the illusory truth effect” (Pennycook et al., 2018, 5). As hinted in the latter part of our definition, fake news is a co-production by the initiator and its receptive audience (Tandoc et al., 2017, 148).
Joseph Forgas and Roy Baumeister (2019, 2–5) suggest that human gullibility is a decisive factor. They explain belief in fake news as a general failure of social intelligence due to ignorance and credulity, which leads one to more easily believe a misleading or unproven claim despite considerable evidence to the contrary. The authors consider fake news’s success to rely on the human need for information and our general evolutionary trust in individuals over faceless institutions. As such, humans do not see the world as it is, but as it appears to them, constructing meaning based on information from others. This is in line with the current scientific consensus that human rationality is “bounded” (Forgas and Baumeister, 2019, 7–8); humans are prone to irrationality, which is paradoxically reinforced by the heuristic human need for causal explanations and pattern-seeking behaviors. These needs can lead an individual to connect patterns that are merely correlative or non-existent (e.g., pareidolia) in helping to find meaning in the world. To question information, one must first internalize said information and analytically challenge it. This requires effort against the salience of oft-repeated falsehoods; the extra step becomes mentally taxing and is therefore often skipped (Forgas and Baumeister, 2019, 9–10). Propaganda and fake news (sometimes called peer-to-peer propaganda) directly play into this gullibility by presenting falsehoods as truths, often in relation to partially true representations of events and people’s actions. The people who believe in fake news do so, as suggested, out of evolutionary drives, including overconfidence in one’s own competencies, knowledge, and accepted beliefs (the Dunning–Kruger effect) (Forgas and Baumeister, 2019, 10; Rapp and Salovich 2018, 235).
Falsehoods prove hard to correct through evolutionary thought processes. Fallacious thinking is self-perpetuating, David Rapp and Nikita Salovich (2018, 232–235) highlight. Believers in fake news are well-intentioned and critical but operate on a baseline of distrust and false foundational knowledge. Rapp and Salovich, therefore, use the term “confusion” to explain the psychological efficacy of fake news. As stated, human perception of the world is socially constructed. When a reader sees a claim that runs counter to the conventional narrative, it makes the reader slow down, as the new information confuses them. The more someone reads incorrect information, the more they doubt the conventional narrative. This doubt impairs a person’s decision-making, causing ambiguity, which humans find difficult to deal with in general. The original narrative is psychologically tagged as suspicious in subsequent readings by the doubt-inducing information. While readers typically dispel inaccurate information based on their overconfidence, disinformation chips away at this confidence and makes one more inclined to second thoughts when exposed again. The switch to believing the false narrative occurs when such a narrative seems more familiar to us. This belief intensifies when that narrative’s contents are easily digestible. Additionally, if one finds it hard to criticize or question a false narrative due to unfamiliarity with its lies, the false narrative will become more potent. Lastly, continued exposure to false narratives assists in forming biases against conventional truths.
Moreover, recent developments in “traditional” news aid the salience of fake news. The aforementioned growing distrust of news outlets undermines general trust in the Fourth Estate (Rapp and Salovich, 2018, 235). Naturally, fake news does not have this issue, as its alternative facts repeatedly lead to the same conclusion. Furthermore, fake news is more emotionally and politically charged–often challenging the establishment–which further increases its salience and audience engagement (Bakir and McStay, 2017, 1, 7; Pennycook and Rand, 2019a, 39). Fake news benefits from the incredible speed and volume of information brought to us each day. Moreover, news outlets often have trouble catching up with correcting their errors, leading to the spread of contradictory information and the subsequent dismissal of news outlets outright (Pennycook and Rand, 2019a, 48). The individuals most susceptible to becoming critical of conventional narratives are those most overconfident in their ability to assess falsehoods. This explains how individuals come to entrench their fake news-related beliefs, and why they will repeat them. The fake narrative infiltrates their worldview; they come to rely strongly upon said beliefs and may become activated by them. This is further emboldened if the information sounds intuitively correct and if the reader has “faith in [their own] intuition” (Pennycook and Rand, 2019b, 193–194, 196) over, for instance, a self-critical reflection on how they judge something to be true (cf. ‘overconfidence’). This self-affirming intuition, in turn, makes it harder to correct misconceptions that develop from ideas spread by fake news (ibidem). Moreover, if these ideas are echoed back to them in their social circle with confidence, it makes them more liable to accept such claims (Rapp and Salovich, 2018, 235–236).
When others repeat these falsehoods, the falsehoods are more quickly processed by the reader, to the point where a reader will believe the claim even if forgetting the original story (Pennycook et al., 2018, 4; De Keersmaecker and Roets, 2017, 109). Finally, the information becomes so ingrained, that even reminding them that the information they read is either wrong or that experts have disputed the claims, does “not undermine or even interrupt the effect of repetition” (Pennycook et al., 2018, 5).
Succinctly, socio-political circumstances should not be overlooked. For one, societal polarization plays a key role, as showcased by Pennycook and Rand (2019a) as well as Andrew Guess et al. (2019a, 1). Contested political situations can persuade critical and analytical thinkers to become so fervent in their political beliefs that they take more extreme positions in line with their ideological identity. The consequence is that if fake news presents a political position, a polarized person might just fall in line in spite of their capacity to think analytically (Pennycook and Rand, 2019a, 40). Contrastingly, Bronstein et al. stress that delusion-prone and dogmatic people as well as religious fundamentalists are more likely to believe fake news because of less open-mindedness and analytical thinking in general (Bronstein et al., 2019, 115). Nevertheless, the exact area of overlap in believing implausible claims here, between analytical and “less analytical” individuals, has not yet been researched. Noteworthy is the overall skepticism of a person who falls prey to fake news. Guess et al. further the notion that not all fake news consumption is equal. For instance, those who regularly visited fake news websites amounted to only 10% of the American public in the last month of the 2016 presidential elections, while their effect was more substantial. These particular individuals are categorized as conservative-leaning (though not because of party loyalism, see Pennycook and Rand, 2019b, 196), further typified by Pennycook as being less likely to be open-minded or more ideologically rigid. Strikingly, being more politically informed does not change one’s susceptibility to fake news (Guess et al., 2019a, 11; Pennycook and Rand, 2019a, 5, 30; De Keersmaecker and Roets 2017, 109). There is also an important correlation between political conservatism and age–itself impacted by digital literacy.
Due to the focus of this article, it cannot substantively link this correlation to the article’s meta-analysis on age. Nonetheless, the conclusions provided present themselves as a stepping stone for future researchers to further examine the connections between the factors discerned here from the currently unintegrated body of literature. In fact, as the article later establishes, age is a largely underappreciated factor in current studies and demands to be addressed (see “Considering Age” and Supplementary Material).
Because fake news is a language-based phenomenon, the article will now investigate fake news linguistically and then investigate it from a reading comprehension perspective.
Fake News and Language
This section provides an overview of both the older, manual methods of biased news detection and the more recently developed machine methods of fake news detection. As will become clear, despite significant advances in machine learning tools, there is still a long way to go in the study of the role that language plays and can play in fake news identification. Additionally, new challenges exist due to the shift from qualitative hands-on analytic procedures to quantitative algorithmic analytic processes.
Detecting Biased News Reporting Through the Manual Analysis of Language
The strategic choice of language use and the strategic positioning of words or sentences within a given linguistic context fall firmly within the domain of classical rhetoric. In linguistics, these topics have found a home in stylistic scholarship, which draws on the linguistic cornerstones of phonetics, morphology, and syntax on the one hand and the many social approaches to language use on the other. Areas closely related to stylistic scholarship are sociolinguistics, historical linguistics, corpus linguistics, pragmatics, and discourse analysis. A further offshoot of this last area is critical discourse analysis (or critical linguistics/stylistics), which started in earnest some thirty years ago in a largely non-internet age, and thus in the pre-social media era. In this period, linguists, mostly positioned on the liberal left, often analyzed the discourse that appeared in right-leaning and centrist newspapers for language bias and power in reported news events (e.g., Fairclough, 1989, passim; Fairclough, 1995; Van Dijk, 1987, passim; Wodak, 1987, passim). In this age of eyes-on-the-page analysis, these scholars would use linguistic frameworks to illustrate how such things as word choice, word placement, and sentence structuring were used to re-frame certain interpretations of an otherwise relatively neutral news event. Such biased discourse could have the effect of repositioning readers and hearers to process and understand a particular event in a “skewed” ideological light.
Several linguistic approaches and tools lent themselves well to such analysis. One of these was systemic functional grammar, a social semiotic approach to language developed by the linguist Michael Halliday (1978, passim). Halliday claimed that language is a system of choices and that these choices have both ideological and socio-cultural functions. The focus of systemic functional grammar is on the two concepts of textual cohesion and transitivity, the latter of which is of most significance here. Transitivity analysis focuses on how processes are communicated and in particular how the choice of the main verb in a sentence or clause can alter or realign the ways in which a recipient of that text or utterance perceives the discourse message. Another persuasive tool for reader realignment is the concept of modality, which involves the use of modal auxiliary verbs, sentence adverbs, evaluative adjectives and adverbs, etc. The strength of modality, as a powerful linguistic tool for influencing readers, is that it communicates attitudes through language choices and in doing so can readjust the perspective or point of view of a reader or listener (Fowler, 1986, passim).
A related linguistic phenomenon that also lends itself easily to persuasive purposes is deixis (from a Greek term meaning ‘pointing’). Deixis refers to a number of words that are context-dependent, especially in the sense of spatio-temporal context. Typical deictic categories include pronouns, especially demonstrative pronouns, and adverbs. Deictic choices always involve a demarcation of boundaries that affects how readers and hearers receive and understand a text. Deixis has also played an important persuasive role in political discourse contexts (see Pennycook, 1994, passim; Van Dijk, 2002, passim; Chilton, 2004, passim; Mulderigg, 2012, passim).
Similar approaches to the manual analysis of language use in texts also exist in the social sciences, especially in the domains of communication science and social psychology. Much of what the article discussed above falls within Robert M. Entman’s understanding of the concept of “framing” from communication science, namely, the idea that producers of texts in ideological settings can select certain aspects of a perceived reality and make them more salient for a reader/hearer. In doing so, they can promote a particular perspective on an issue in favor of possible other perspectives (Entman, 1989, passim; Entman, 1991, passim; Entman, 1993, passim). Similar influence tools can also be observed in Hugh Rank’s language persuasive framework of “intensify” and “downplay”, which he originally designed to analyze advertisements, but which is also used extensively in political and newspaper discourse. Tactics of intensifying, namely, that which is perceived as ‘positive/beneficial’, include (i) repetition, (ii) association, and (iii) composition. In the category of ‘association’, for example, words representing positive experiences are used to link readers/hearers (i.e. potential customers) to products that they might desire to purchase either now or in the near future. These are words that encourage readers/hearers to accept an association without critical analysis or without processing messages ‘centrally’, as described in the elaboration likelihood model (Petty and Cacioppo, 1986, passim).
Likewise, tactics of downplaying, namely, that which is perceived as ‘negative/detrimental’, include (i) omission, (ii) diversion, and (iii) confusion. This last category of lexical confusion includes the purposeful application of evasive and distal linguistic devices such as jargon, euphemisms, and gobbledygook. There is also the related notion of “god” and “devil” terms (Burke 1969a, passim; Burke 1969b, passim). These are language items that are constrained by culture and time. So-called “god” words in Western society today could be things like “democracy” and “freedom”, while so-called “devil” terms could be things like “slavery” and “totalitarianism” and even “fake news”. In the West today, for example, even though the term “fake news” is used by the left against the right-wing news media and by the right against the left-wing news media, both use it in the same “negative/devil term” sense.
These, then, are some examples of manual frameworks for analyzing persuasive discourse in the pre-social media and pre-machine age of analysis. All these approaches, from both the humanities and the social sciences, are essentially re-construals of the core analytic precepts and principles of classical rhetoric, namely, the persuasive tools of logos, pathos, and ethos. These days it is not the analysis of left-wing or right-wing language bias in newspaper articles–written by journalists and subsequently shaped by editors and sub-editors–that linguists in this field are mostly interested in; rather, it is the meta-level stylometric analysis of thousands of articles and news sources that have come from either unknown or false sources. In this big data analysis it is not necessarily that events in a story have been given an ideological linguistic slant; rather, the entire news story–or a significant part of it–has been made up. In short, it is not ideologically distorted news that is being analyzed but pure fake news.
Many fake news stories that end up in mainstream news outlets have started life on social media. The advent of the internet, and the publishing of “news” that is shared on social network platforms such as Facebook, Instagram, and Twitter, has brought new challenges to rhetorical discourse scholars and scientists that, arguably, only the analysis of big data, via computational techniques, can adequately address.
Detecting Fake News Through the Machine Analysis of Language
While there are some tentative, limited ways we can use digital/machine methods to identify fake news linguistically, there are still some hurdles that need to be overcome (see e.g., Oshikawa et al., 2020, passim; Schuster et al., 2020, passim).
A fitting place to start this account of the machine analysis of language is with the sub-discipline of stylometry. This approach has often been used in past scholarship in an attempt to attribute authorship to texts that are either disputed or whose author is unknown. It also has uses in the courtroom, where it takes on the guise of forensic linguistics. It has a long history of manual application but is these days used in conjunction with natural language processing, machine learning, and artificial intelligence. Natural language processing has advanced capacities and can search for a wide range of linguistic features including syntax structure, lexis (n-grams), punctuation, general readability, etc. At its core, it is about training algorithms to identify and categorize items and then make attribution decisions.
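To make the feature-extraction step concrete, the sketch below computes a handful of classic stylometric cues (average sentence length, lexical diversity, punctuation density, character-trigram variety) in plain Python. This is a simplified illustration, not the pipeline of any particular study; real stylometric systems use far richer feature sets and feed them into trained classifiers.

```python
import re
import string

def stylometric_features(text):
    """A few classic stylometric cues of the kind an authorship-attribution
    or fake news classifier might consume (illustrative only)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    punct = sum(1 for ch in text if ch in string.punctuation)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "punct_per_char": punct / max(len(text), 1),
        # distinct character trigrams, a common stylometric n-gram cue
        "char_trigrams": len({text[i:i + 3] for i in range(len(text) - 2)}),
    }

feats = stylometric_features("The cat sat. The cat sat again, quietly!")
print(feats["avg_sentence_len"])  # 4.0
```

In a full system, vectors like these would be computed for thousands of documents and passed to a machine learning model rather than inspected by hand.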
There have been several relevant studies conducted in recent years in this area of what might be seen as the intersection of linguistics, stylistics, rhetoric, computer science, machine learning, natural language processing, and artificial intelligence. Many studies show promising results, but they also highlight the many complications and impediments that exist. One challenge pertains to the quality of sizable fake news data sets used for analysis. A recent study conducted by Torabi Asr and Taboada (2019, passim) reviewed a number of fake news datasets and found them to be largely incomplete and unbalanced concerning topics and genres. The authors also highlight the real challenge facing researchers active in this field, namely, the ability to collect actual fake news data to be used in a database. A key reason for this difficulty is that articles need to be analyzed and labeled manually by linguistic and stylistic experts, which is a very time-consuming activity.
A second challenge, related to the quality of the fake news databases, is where one draws the line between the problematic categories of fake news, propaganda, and hoaxes on the one hand and the legitimate, even desirable, category of political satire on the other, which may include the rhetorical humor tools of sarcasm and irony. In one study, Rashkin et al. (2017, passim) conducted a survey on the language of news media in the context of political fact-checking and fake news detection. They compared the language used in genuine news stories to the language found in satires, propaganda, and hoaxes in order to try and discover some stylistic and linguistic characteristics inherent to an unreliable text. They found that although there are still many questions around the process and methods of media fact-checking, stylistic cues do indeed appear to help in determining whether or not a text is trustworthy. They surveyed the fifty highest weighted n-gram features in the “Maximum Entropy Classifier” program for each class, namely, (i) trusted news, (ii) satire, (iii) hoaxes, and (iv) propaganda. The highest weighted n-grams for the category of trusted news were often specific place names or times (e.g. cities and days of the week). The highest for the satire category were typical gossip adverbs such as “reportedly” and “allegedly”. Hoax articles tended to have heavily weighted items pertaining to divisive topics; two examples from the corpus were “Trump” and “Liberals”. Hoax articles also tend to employ so-called dramatic journalistic cues such as ‘breaking’. Heavily weighted n-gram features in the category of propaganda texts include abstract generalities and specific issues. Examples that the authors give of the former include words like “freedom” and “truth”, and instances that are given of the latter include “vaccines” and “Syria”.
The observation is also made that the terms “YouTube” and “video” are highly weighted for the propaganda and hoax categories respectively. This, the authors of the study conclude, may indicate that they often rely on video clips as sources (Rashkin et al., 2017, 2,934).
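To illustrate what “highest weighted n-gram features per class” means in practice, the toy sketch below ranks unigrams per class by smoothed log-odds against all other classes. This is a hand-rolled stand-in for inspecting the learned weights of a trained Maximum Entropy (logistic regression) classifier like the one Rashkin et al. used; the class names and documents here are invented for illustration.

```python
from collections import Counter
import math

def top_class_unigrams(docs_by_class, k=3):
    """Rank unigrams per class by add-one-smoothed log-odds versus all
    other classes -- a simple proxy for the highest-weighted features
    of a Maximum Entropy (logistic regression) model."""
    counts = {c: Counter(w for d in docs for w in d.lower().split())
              for c, docs in docs_by_class.items()}
    total = Counter()
    for cnt in counts.values():
        total.update(cnt)
    vocab = len(total)
    grand = sum(total.values())
    ranked = {}
    for c, cnt in counts.items():
        n_in = sum(cnt.values())
        n_out = grand - n_in

        def log_odds(w, cnt=cnt, n_in=n_in, n_out=n_out):
            p_in = (cnt[w] + 1) / (n_in + vocab)        # add-one smoothing
            p_out = (total[w] - cnt[w] + 1) / (n_out + vocab)
            return math.log(p_in / p_out)

        ranked[c] = sorted(total, key=log_odds, reverse=True)[:k]
    return ranked

docs = {
    "trusted": ["officials in paris said on tuesday",
                "the city council met on monday"],
    "satire": ["the president reportedly ate the moon",
               "aliens allegedly run congress"],
}
print(top_class_unigrams(docs)["satire"])
```

On this toy corpus, the satire class surfaces exactly the kind of gossip adverbs (“reportedly”, “allegedly”) that Rashkin et al. report, while the trusted class surfaces place names and days of the week.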
Not only satire but also sarcasm and irony remain grey areas of language and ones that are difficult to detect with any certainty using machine learning and algorithms. A recent qualitative analysis of sarcasm, irony, and related hashtags on Twitter conducted by Sykora et al. (2020, passim) found that many past studies conducted on the machine detection of sarcasm and irony had failed, owing to a lack of appreciation of the quality of the actual linguistic data. To address this, the researchers ran their own experiment on more than 4,000 Twitter messages during which they performed a manual semantic annotation procedure. They also took the contextualized humoristic use of multi-word hashtags into consideration, something that a sentiment analysis tool used in big data analysis would not pick up on. Using a qualitative approach, they discovered that only 15% of the Tweets previously labeled as sarcastic in machine reading procedures were actually sarcastic. They concluded their study with a call for better procedures in data preparation when interpreting the outcomes of such sentiment analysis.
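One concrete data-preparation step of the kind Sykora et al. argue for is making multi-word hashtags legible to analysis at all. The hypothetical helper below splits camel-cased hashtags into word sequences; a plain bag-of-words sentiment tool would otherwise treat something like #BestDayEver as one opaque token and miss its (possibly sarcastic) contextual meaning. The function and example are illustrative, not taken from the study.

```python
import re

def split_hashtags(tweet):
    """Expand camel-cased multi-word hashtags (e.g. #BestDayEver) into
    word sequences so their contextual meaning is available to
    downstream sentiment or sarcasm analysis (illustrative sketch)."""
    expanded = {}
    for tag in re.findall(r"#(\w+)", tweet):
        words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", tag)
        expanded[tag] = [w.lower() for w in words]
    return expanded

print(split_hashtags("Great service again #BestDayEver #not"))
```

Even with the hashtag expanded, deciding whether “best day ever” is sincere or sarcastic still requires context, which is precisely why manual semantic annotation remains necessary.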
A core challenge observed in all of the abovementioned computational linguistic studies is the need for high-quality, manual semantic analysis. Such annotation needs to be conducted by trained linguists, stylisticians, and rhetoricians in order to control for the patterns of grammar, logic, and rhetoric of a text. Moreover, it is possible to go beyond language and say that it is arguably not enough to only look at the language and genres of fake news in isolation. Context features have also proven valuable for machine analysis and are often used in conjunction with linguistic features. Examples of contextual features are the characteristics of content creators, such as the number of posts or the age of the account, but also the source of a story, the platform on which it is being hosted, the number of times it has been shared, the comments that accompany it, whether or not the piece seeks to discredit another (more established) news source, etc. Accounts that create false information are often recently registered and tend to lack the skill to produce well-written articles. These types of accounts are also referred to as “throw-away” accounts. Another important context feature is how accounts form a network together. Networks that spread fake news often form very tight clusters of (fake) accounts that have a lot of overlap between their followers and followees and share each other’s content. Bots are often at the center of such networks. Time-based context features, such as how many likes a social media post gets within a certain time frame, are also valuable data to aid machine analysis in its quest to automate fake news detection (Kumar and Shah, 2018, 17–19; Bondielli and Marcelloni, 2019, 46–47). Some have even exclusively used context features, such as user news-spreading behavior and news propagation features, to achieve competitive results (Zhang and Zadorozhny, 2020, passim). The extended context is a crucial part of the picture.
Combined with the linguistic evidence that has been arrived at both by hands-on linguistic qualitative annotation/analysis and by meta-level machine analysis of large amounts of data, it could provide a powerful tool in the drive toward limiting, and perhaps even halting, the spread of fake news.
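To make one of these network context features concrete, the sketch below computes pairwise Jaccard overlap between accounts’ follower sets; unusually high overlap across a cluster of recently created accounts is the kind of signal that, combined with linguistic cues, feeds automated fake news detectors. The account names and follower sets are invented for illustration.

```python
def follower_overlap(accounts):
    """Pairwise Jaccard overlap between accounts' follower sets.
    Tight clusters of mutually overlapping (often fake) accounts are
    one context feature used alongside linguistic features."""
    names = list(accounts)
    scores = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            fa, fb = accounts[a], accounts[b]
            union = fa | fb
            scores[(a, b)] = len(fa & fb) / len(union) if union else 0.0
    return scores

followers = {
    "acct1": {"u1", "u2", "u3", "u4"},
    "acct2": {"u1", "u2", "u3", "u5"},   # heavy overlap with acct1
    "news_org": {"u6", "u7", "u8"},
}
scores = follower_overlap(followers)
print(scores[("acct1", "acct2")])  # 0.6
```

In a real system, such pairwise scores would be thresholded or clustered to flag suspiciously tight account networks for further review.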
A further complication for stylometric and computational linguistic research is that language is supple. It has a form, but that form may have several semantic functions–and these might even be dependent on the co-text and/or context. This means that a specific form or trait can never be fully identified to the exclusion of other possibilities. Words are not like numbers or like binary zeros and ones. They are far more rhetorical in their flexibility and far less logical in their predictability.
When trying to detect fake news by looking at language, there are a number of linguistic “flags” that could indicate that a news item is fabricated. This flag system, however, is no guarantee. In a recent presentation of their research at the 13th International Conference on Semantic Computing, Traylor et al. (2019, passim) showed how fake news stories on social media platforms share key linguistic characteristics. These include making disproportionate use of unsupported hyperbole and non-attributed quotations. Similarly, in a study by O’Brien et al. (2018, passim), and in line with hyperbole and other similar rhetorical tools, “signatures of exaggeration” were also identified as possible markers of a fake news text.
In the earlier-mentioned study by Rashkin et al. (2017, passim) that looked at four types of news discourses, including hoaxes and propaganda, the researchers introduced the hypothesis that fake news stories will attempt to enliven/invigorate the language that they use in order to attract readers. To test this, they drew up five lists of words from Wiktionary that imply some degree of exaggeration/dramatization. These were comparatives, superlatives, action verbs, manner adverbs, and modal adverbs. They then analyzed the data for their occurrence. They made a number of observations, including that words that can be used to exaggerate, i.e. subjective adjectives (e.g. “brilliant” and “terrible”) and superlatives (e.g. “worst” and “most”), all occurred more frequently in fake news articles than in real/reliable news. The same was true for modal adverbs (e.g. “totally”, “definitely”, and “absolutely”). Comparatives were more often found in trusted news sources, as were figures, numbers, and references to money. Direct/assertive words were also found to be more present in genuine news sources. The researchers also observed in their data that less-reliable and deceptive types of news discourses tended to use a number of other distinct linguistic features. These include first-person and second-person pronouns (e.g. “you”) and hedges. However, the study also found nuance across the four discourse types: trusted news, satire, hoaxes, and propaganda. For example, the two types of untrustworthy news (hoaxes and propaganda) tended to use more adverbs than trusted news, but, complicating the matter, so did satire. Hoaxes tended to use fewer superlatives than propaganda, and propaganda tended to use more assertive verbs (Rashkin et al., 2017, 2,932–2,934).
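The counting behind such findings can be sketched as simple lexicon-based rates. The word lists below are tiny invented stand-ins; Rashkin et al. drew far larger lists (comparatives, superlatives, modal adverbs, etc.) from Wiktionary and compared rates across their corpora.

```python
import re

# Tiny illustrative lexicons -- stand-ins for the much larger
# Wiktionary-derived lists used in the actual study.
SUPERLATIVES = {"worst", "best", "most", "greatest"}
MODAL_ADVERBS = {"totally", "definitely", "absolutely"}

def intensity_rates(text):
    """Per-1,000-word rates of superlatives and modal adverbs --
    the kind of dramatization cue found more often in fake news."""
    words = re.findall(r"[a-z']+", text.lower())
    per_k = 1000 / max(len(words), 1)
    return {
        "superlatives": sum(w in SUPERLATIVES for w in words) * per_k,
        "modal_adverbs": sum(w in MODAL_ADVERBS for w in words) * per_k,
    }

fake = "This is absolutely the worst, most corrupt deal ever. Totally rigged."
print(intensity_rates(fake))
```

Normalizing to a per-1,000-word rate matters because raw counts would otherwise simply track document length rather than stylistic intensity.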
In addition to being subjective in nature, fake news also tends to be sensationalist. It likewise frequently concerns contentious and provocative topics. Its gossip-like quality also lends itself to being shared more readily across social network platforms and this sometimes ends up being reported in and by mainstream media outlets. Factually correct news is more likely to be communicated in a relatively objective and non-sensational manner and tone. It is also less likely to be shared, given its non-gossip-like, perhaps even tedious, quality (Vosoughi et al., 2018, passim).
In the work of Vosoughi et al. (2018, passim) cited above, the authors found that true and false rumors on Twitter trigger different emotions, e.g. fear, disgust, and surprise in the case of false rumors. Emotions play a key role in the consumption of fake news, a point not adequately touched on here thus far. Some recent works that highlight the importance of emotion and language include studies conducted by Giachanou and colleagues. In their study of emotional signals in fake news detection, Giachanou et al. (2019, passim) proposed a long short-term memory (LSTM) model that incorporates emotional signals extracted from the text of claims to distinguish between trustworthy and untrustworthy ones. Their experiments on real-world datasets showed the significance of emotional signals for trustworthiness evaluation. Building on that, in their 2020 study, Giachanou et al. showed in their experiments that leveraging linguistic patterns and personality traits can enhance performance in differentiating between so-called fake news “checkers” and fake news “spreaders”. Lastly, in their very recent comparative study, Giachanou et al. (2021, passim) employed psycho-linguistic characteristics to detect conspiracy propagators. Using the ConspiDetector model, their results showed that detection performance with regard to conspiracy propagators can be improved. Ghanem et al. (2020, passim) did similar work on emotion and language in fake news by conducting an emotional analysis of false information in social media and news articles. In this comparative study, their experiments showed that false information has different emotional patterns in each of its types and that, as such, emotions must play a key role in deceiving the reader. Based on their results, they propose an LSTM neural-network model that is emotionally infused to detect false and fake news. A very recent follow-up study conducted by Ghanem et al.
(2021, passim) addressed fake news detection by modeling the flow of affective information. The researchers modeled the flow of affective information in fake news articles using a neural architecture. Their model, “FakeFlow”, combines topic information with affective information extracted from the text. Conducting experiments on real-world datasets, the researchers found that FakeFlow achieves better results when compared against other methods. Their outcomes highlight the importance of capturing the “flow” of affective information in news articles.
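The core idea of an affective “flow” (a sequence of emotion scores over the course of an article, rather than one document-level score) can be illustrated with a hand-rolled lexicon sketch. FakeFlow itself is a neural model; everything below (the lexicon, the segmentation scheme, the example text) is invented purely to illustrate the input representation.

```python
# Minimal sketch of an affective "flow": segment an article and score
# each segment against a tiny emotion lexicon, yielding a sequence of
# scores instead of a single document-level score. Illustrative only.
FEAR_WORDS = {"threat", "danger", "panic", "afraid", "crisis"}

def affective_flow(article, n_segments=3):
    words = article.lower().split()
    seg_len = max(len(words) // n_segments, 1)
    flow = []
    for i in range(n_segments):
        seg = (words[i * seg_len:(i + 1) * seg_len]
               if i < n_segments - 1 else words[i * seg_len:])
        hits = sum(w.strip(".,!") in FEAR_WORDS for w in seg)
        flow.append(hits / max(len(seg), 1))
    return flow

article = ("Officials met calmly today. Reports then spoke of danger and "
           "panic spreading. By nightfall the crisis dominated every channel.")
flow = affective_flow(article)
print(flow)
```

The resulting sequence (calm opening, emotional spike, partial tapering) is the kind of trajectory a model like FakeFlow learns over, rather than a single aggregated emotion count.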
In summary of this account of fake news and language usage, although the signs for researchers are promising–with regard to using language cues in machine learning and rich data analysis as a means to detect fake news–much important and difficult work still lies ahead. What has become clear is that interdisciplinary collaboration will be very much needed if meaningful headway is to be made.
Reading Comprehension and Fake News
Having discussed the linguistic components of fake news, the following section examines the related issue of reading comprehension in the context of how human brains process fake news. The section looks at some of the compounding factors of digital reading comprehension and further considers the underexplored factor of the reader’s age herein.
To understand how fake news might influence the reader, the processing of information through reading is itself a relevant avenue of analysis. More specifically, how a person processes text, understands its meaning, and integrates the read information with what the reader already knows (Davis, 1944, passim) is logically connected to the viability of fake news.
Fake news in its current form occurs mostly “on-screen” and it is, therefore, important to consider its interplay with reading comprehension via the digital medium. For instance, online readers generally show more scanning, skimming, and keyword spotting. Additionally, online reading is associated with less sustained attention, less time spent on in-depth reading, one-time reading tactics, and non-linear reading approaches (Liu, 2005, passim; Duggan and Payne, 2011, passim). Despite these behavioral and cognitive differences, literature on the relationship between reading comprehension and medium choice remains rather inconclusive. Some students report better reading comprehension from reading printed text than from a digital device (Mangen et al., 2013, passim; Ben-Yehudah and Eshet-Alkalai, 2014, passim; Daniel and Woody, 2013, passim). These results are often explained by potential technological disadvantages associated with electronic devices such as screen glare, visual fatigue, and less-than-convenient navigation (Benedetto et al., 2013, passim; Moustafa, 2016, passim; see Leeson, 2006, passim, for a review).
However, newly accumulating evidence criticizes these explanations as insufficient (Antón et al., 2013, passim; Daniel and Woody, 2013, passim; Lin et al., 2015, passim; see Gu et al., 2015, passim, for a review). Some research has found better comprehension scores for the digital medium (Kerr and Symons, 2006, passim; Verdi et al., 2014, passim) while yet others report no difference at all (Margolin et al., 2013, passim; Green et al., 2010, passim; Holzinger et al., 2011, passim; Rockinson-Szapkiw et al., 2013, passim; Porion et al., 2016, passim; Schugar et al., 2011, passim).
Moreover, the shortcomings of digital reading are amendable by adapting (and teaching) new reading strategies that digital reading allows for and which are successfully used by fact-checkers. As pointed out by Wineburg and McGrew (2017, 1, 23, 28, 38), fact-checkers have developed special techniques, notably “lateral reading”, that allow them to determine the validity of a text more quickly and accurately than traditional scholars can. As Wineburg and McGrew explain, traditional reading happens “vertically” (i.e., the focused reading of one text from top to bottom, page after page). As a result, “vertical” reading means that the reader assesses the validity within the text itself through evaluating its visual, linguistic, and factual aspects, as highlighted before by linguists and machine learning methods (Wineburg and McGrew, 2017, 38). The problem with “vertical reading” is that if a fake news text superficially meets the reader’s expectations for academic texts, this can bias even trained academics into assuming the contents are correct. For example, such a fake news text appears to originate from a credible-sounding institution; it is hosted on a “.org” domain; it has an abstract; it wields an academic tone; it lacks sensationalist language; and it contains no broken links (Wineburg and McGrew, 2017, 40–41). “Lateral reading”, however, utilizes new opportunities of the digital medium by comparing the main text with other sources opened in additional tabs, laterally to this main text, to check its validity from without. These techniques are specific to how fact-checkers scour the internet for information that discredits the origin of the source without first addressing specific claims or linguistic or visual presentation. Thus, as Wineburg and McGrew note, “lateral reading is not reading” (Wineburg and McGrew, 2017, 38)—at least in a narratival or linear sense. “Lateral reading” hence falls outside the purview of “reading comprehension” as such and the current scope of research in this area.
Then again, fact-checking through “vertical reading” is also not necessarily about reading but rather about critical textual analysis (internal source criticism) of particular elements. Wineburg and McGrew thus stress that we should treat the web like “a web” to overcome fake news that originates from obscure sources (Wineburg and McGrew, 2017, 46). As such, future research might reconsider how “reading” and “fact-checking” can be fused to harness the possibilities offered by the digital medium against disinformation found online.
While there does not seem to be a consensus on whether reading comprehension is affected by the reading medium, critics of the previously mentioned studies point to the rigid experimental settings and a lack of awareness of biases. For one, the studies typically involve college students, who are highly familiar with the setting of reading a text and answering questions within a certain timeframe (Baron, 2017, passim). Interestingly, when Ackerman and Goldsmith (2011, passim) gave participants the freedom to allocate for themselves the amount of time they spent on reading, the group reading digitally allocated less time. This group also had lower reading comprehension scores. In addition to measuring reading comprehension, the researchers also measured metacognitive monitoring and control processes. They found that calibration bias (a measure of overconfidence or underconfidence) was higher for the digital readers than for the paper readers. The digital group consistently overestimated their own knowledge of the reading material. Linking this to what was mentioned before, the psychological makeup of a person susceptible to fake news is, at least partly, related to one’s overconfidence in knowledge (Pennycook and Rand, 2019a, 33–35). These results suggest that people reading from a digital medium might be particularly susceptible to fake news as they experience higher levels of confidence about their own knowledge and possibly their ability to detect falsehoods.
In further research, lower comprehension scores and altered metacognition were again found for some reading conditions but not others (Ackerman and Goldsmith, 2011, passim; Ackerman and Lauterman, 2012, passim; Lauterman and Ackerman, 2014, passim). The researchers concluded that there are cognitive and metacognitive influences (particularly overconfidence) that, depending on the reading conditions, translate into noticeable inferiority for “on-screen” reading when it comes to reading comprehension. They also reported that people’s medium preference affects their metacognitive processes when learning from texts. This is especially relevant considering that older generations prefer reading from paper (Kretzschmar et al., 2013, passim). What recent research indicates is that younger (student) readers appear to do most of their literary reading on digital devices and, as such, can be seen as “hybrid” readers who especially read fiction on their laptops and other mobile devices (Burke and Bon, 2018, passim). It also appears that the mobile digital literary reading experience is embedded in the immediate environment and in the broader situational context of the reading event itself (Kuzmičová et al., 2018, passim). These last two points are themselves linked to and compounded by the cohort effect whereby older (pre-social media) generations are less tech-savvy than younger generations—the article returns to this point in “Considering Age”.
Compounding Factors of Digital Reading Comprehension
One potential cognitive explanation for a decreased performance when reading digitally is that interacting with digital media elicits a shallower form of processing on a wide range of tasks (Kaufman and Flanagan, 2013, passim; Daniel and Woody, 2013, passim; Mueller and Oppenheimer, 2014, passim; Lauterman and Ackerman, 2014, passim). Possible reasons for these results stem from the different types of interactions that are associated with digital media. Typically, a screen involves the brief reading of emails, social networking, and forums. This type of activity promotes the behavioral differences observed for online reading (e.g., scanning). Researchers also found that digital reading hinders deep immersion in the text itself (Mangen and Kuiken, 2014, passim). Kaufman and Flanagan (2013, passim) argued that due to the increasing demands of multitasking, information overload, and expectation of immediate gratification associated with the digital medium, participants “retreat” to a less cognitively demanding way of thinking (Sparrow et al., 2011, passim). Interestingly, multitasking capabilities have been observed to be higher in younger generations compared to older generations. This discrepancy could be explained by multitasking simply being a cognitively more demanding task (Carrier et al., 2009, passim). The problem with shallow processing and cognitive retreat is that readers using digital devices may find it difficult to engage in tasks such as analytically challenging the information they read; this task is mentally taxing and thus often skipped (Forgas and Baumeister, 2019, passim).
The International Federation of Library Associations (IFLA) gives guidelines to help individuals spot fake news. They recommend asking critical questions while reading, as do many educational initiatives (Musgrove et al., 2018, passim; O’Connor et al., 2010, passim). The evaluation of media in this way is cognitively demanding and requires critical thinking. With the increasing evidence of digital readers engaging in shallow processing and retreating to less cognitively demanding ways of thinking, chances are that many digital readers never arrive at asking themselves the critical questions required to appropriately handle fake news. This is likely compounded by the “echo chamber” effect mentioned earlier, as current algorithms and the “infinite scroll” of current social media platforms continuously and repeatedly “feed” similar content to viewer profiles based on their reading habits.
At this point, the article has established that there are several reasons why it is important to consider a person's age as a factor in susceptibility to believing fake news. Additionally, age-associated cognitive decline compounds these factors. Such cognitive decline is a normal, non-pathological form of cognitive aging that sets in from middle age (30s) onwards (Deary et al., 2009, 137–138), defined by a slow decline in processing speed, reasoning, memory, and executive functions when becoming older. Moreover, differences in metacognition are also observed in older people, as pointed out in a study by Palmer, David, and Fleming, which found a “marked decrease in perceptual metacognitive efficiency with age” (Palmer et al., 2014, 151). Furthermore, there are also differences observed between age cohorts in creativity. Creativity is, in turn, believed to be fundamentally linked to metacognition, although some controversy remains (Corgnet et al., 2016, passim; Jia et al., 2019, 6–8).
The cognition of an older person differs from that of a younger person. This fact has, however, largely been left unconsidered in current fake news research. Our meta-analysis (see Supplementary Material) shows that 74% of behavioral fake news studies ignore age and do not make a distinction between age groups (N = 62).
It is important to note then that, despite having a preference for paper reading, older generations increasingly participate on online social media platforms (Hunsaker and Hargittai, 2018, passim) and are, as a result, just as likely to be exposed to fake news. This is especially concerning given the compounding impact of the aforementioned factors that lead to fake news susceptibility as a result of age (both in cognitive and technological ability as part of the cohort effect). As Andrew Guess et al. point out, for instance, Facebook’s demographics are increasingly older in age and among this group, those aged over 65 “shared nearly seven times as many articles from fake news domains as the youngest age group” (Guess et al., 2019a, 1).
While the current literature does not seem to deem age an important factor, age does correlate with political involvement. For instance, voter turnout was far higher among older generations for both the 2016 United States presidential election and the 2016 United Kingdom European Union membership referendum (NatCen Social Research, 2017; Pew Research Center, 2018), which are believed to be heavily influenced by fake news (Guess et al., 2019b, passim). More generally, older generations typically hold much more power in elections due to voting habits compared to younger people, yet the effects of fake news on the younger group are overstudied while the older generations are barely considered. As a result, preventative initiatives to educate people about fake news are also mostly happening in universities and schools, which, depending on the culture, are thus almost solely aimed at young people rather than older generations.
Patently, while age-related cognitive decline is worthwhile to consider it should not be overstated or overgeneralized. While younger people undergo extensive synaptic pruning in young adulthood, older human brains can continue to develop new pathways and new cognitive structures when trained and honed in specific mental exercises (Costandi, 2016, 42). After all, learning and memory exercises are the key drivers of new nerve tissue development (Helmstetter, 2014, passim). Having said that, educational initiatives to counter fake news susceptibility for older generations become, in fact, more pertinent and highlight the importance of lifelong learning for current and future older people in the Information Age.
With factors such as cognitive decline, altered metacognition, a preference for paper reading, and lower multitasking capabilities—while speculative—it is entirely plausible that the current older generations are affected by fake news in a different way than younger generations are today. In short, research into and interventions to combat fake news should, therefore, at the very least, start considering age.
Discussion and Conclusion
While interventions against fake news are well underway, without reconsidering age as a factor in fake news susceptibility, the current approaches might miss out on finding and persuading an older audience. Even already existing lifelong learning initiatives suffer from inaccessibility issues. For instance, currently popular lifelong learning resources are typically so-called “massive open online courses” (MOOCs). MOOCs allow people to follow online courses at their own pace and in their own time, while usually also receiving feedback from assessors. Several MOOCs on fake news are presently online for free. However, our proposed target group, consisting of the cohort of generally less digitally savvy older people, is less likely to navigate the digital infrastructure surrounding MOOCs, let alone be made aware of them in the first place. This point becomes ever more crucial as increasing percentages of older cohorts have become active online over the last ten years (Hunsaker and Hargittai, 2018, passim).
To tailor fake news countering initiatives accordingly, we first need to examine the relationship between age and fake news. Does the effectiveness of fake news depend on a person’s age? Are some forms of fake news more effective for older generations? These questions must be examined while keeping confounding factors like political affiliation and age cohort in mind. The peculiarity of these questions is that the algorithms that control our social media news feeds already implicitly “know” the answers: if such a relationship exists, the articles will already be pushed to the age group that is most drawn to them. If such a relationship is found, the next step would be to examine the effects of medium preference (paper vs. screen), metacognition, age-related cognitive decline, and other factors that might be able to explain this relationship.
While speculative, if there is a relation between age and fake news, it would be more sensible to target elderly people through less digital routes, for which we have several suggestions. First, provide courses relating to fake news detection as part of on-the-job training programs. Second, courses relating to fake news detection could be held at elderly homes, where they could serve not only an educational purpose but also a social one. Elderly residents come together and socialize in the common room, meeting new people in the form of course teachers and assistants, and subsequently learning new skills and techniques—acquiring new toolkits. Toolkits could consist of techniques like “opening new tabs to check a website’s source, using factchecking [sic] and trusted news sites, and leaving sites with misspellings and odd-sounding domain names” (Nash, 2019). Courses and workshops could also be given at libraries and community centers. An example of important teachable skills for 21st-century reading could be teaching lateral reading next to, or instead of, the more traditional vertical reading methods. This digital reading skill is valuable across cohorts, as it makes the larger public aware of how texts can be checked for reliability through a comparative (lateral) method, rather than sticking to the singular text in isolation from co-text and context. This method inherently facilitates the multiperspectivity and media savviness that other methods strive for. Public awareness campaigns could also play a vital role; crucial information about fake news would reach older people via traditional media that they are more likely to use.
In conclusion, fake news is a term with many possible meanings, and just as many factors play a role in psychological susceptibility to it. Underlying this plethora of psychological factors are differences in metacognition, specifically (over)confidence in one’s own knowledge and intuition, that affect one’s susceptibility to fake news. Reading medium preference has in turn been shown to affect metacognition: interacting with digital media elicits a shallower form of processing and a cognitive ‘retreat’, which is disconcerting given that fake news detection requires critical thinking and is a mentally challenging task.
However, age is very likely to play a considerable role in these susceptibility processes as well. For one, metacognition and general cognitive ability change with age, and researchers have observed a preference for paper reading among older generations, pointing to the plausibility that fake news affects older generations differently than other groups. Lastly, the cohort effect adds yet another compounding factor in why older generations in particular might be more susceptible to fake news.
Regrettably, most current research into psychological factors influencing susceptibility to fake news does not take age-related differences in information processing into account. Our meta-analysis showed that 74% of behavioral studies of fake news largely ignore age (N = 62). Future research should fill this lacuna by examining the relationship between age and fake news. Additionally, further research should, based on this article’s profile, also take into consideration worldview-epistemological factors such as political affiliation and perhaps even religiosity. Naturally, this requires an interdisciplinary approach, as more research should be done on how older generations use and are influenced by the internet in general (Hunsaker and Hargittai, 2018, passim). Lastly, to train older generations in successfully identifying fake news, alternative ways of education are needed, outside of the universities. Based on the lifelong learning literature, we recommend providing toolkits, courses, and workshops at workplaces, elderly homes, libraries, and community centers, as well as public awareness campaigns.
Author Contributions
SG and ZO began their research on the effects of digital online reading (in contrast to ‘paper reading’) on reading comprehension under the supervision of MB. SV proposed fake news as a specific focus for this dynamic in relation to new research debates. Upon categorizing the growing body of literature on fake news, SG, ZO, and SV decided collectively to write a review article of recent findings of these studies and their shared implications. One of these findings, identified in brainstorming, was the compounding factor of age, which appeared to the authors to be underappreciated in the current literature. SG introduced the idea of a meta-analysis to systematically review how age is taken into account in current studies; the resulting table is primarily the shared work of SG and ZO. SG, in correspondence with the other authors, noted possible interventions. ZO brought in the cognitive psychological and metacognitive aspects of digital reading. SV included a psychosociological profile synthesis of the current literature on fake news. MB brought in the linguistic aspects of fake news.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
This publication came about as a result of a cross-disciplinary collaboration that evolved out of a series of seminars in the Graduate Honours Interdisciplinary Seminars program at Utrecht University. That seminar, titled ‘The Future of Reading’, was organized by MB and held in the spring of 2019. The authors would like to thank the Utrecht University Honours College for its support in funding the publication of this study. They would also like to thank Tolly (Victoria E.) Eshelby for her suggestions and insights at an earlier stage of the project.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fcomm.2021.661801/full#supplementary-material
References
Ackerman, R., and Goldsmith, M. (2011). Metacognitive Regulation of Text Learning: On Screen versus on Paper. J. Exp. Psychol. Appl. 17, 18–32. doi:10.1037/a0022086
Ackerman, R., and Lauterman, T. (2012). Taking Reading Comprehension Exams on Screen or on Paper? A Metacognitive Analysis of Learning Texts under Time Pressure. Comput. Hum. Behav. 28 (5), 1816–1828. doi:10.1016/j.chb.2012.04.023
Allcott, H., and Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. J. Econ. Perspect. 31, 211–236. doi:10.3386/w23089
Antón, C., Camarero, C., and Rodríguez, J. (2013). Usefulness, Enjoyment, and Self-Image Congruence: The Adoption of E-Book Readers. Psychol. Mark. 30, 372–384. doi:10.1002/mar.20612
Bakir, V., and McStay, A. (2017). Fake News and the Economy of Emotions. Digital Journalism 6 (2), 154–175. doi:10.1080/21670811.2017.1345645
Baron, N. S. (2017). Reading in a Digital Age. Phi Delta Kappan 99, 15–20. doi:10.1177/0031721717734184
BBC News (2016a). 'Post-truth' Declared Word of the Year by Oxford Dictionaries. Available at: https://www.bbc.com/news/uk-37995600 (Accessed May 20, 2021).
BBC News (2016b). Fake News in 2016: What it Is, what it Wasn't, How to Help. Available at: https://www.bbc.com/news/world-38168792 (Accessed May 20, 2021).
Ben-Yehudah, G., and Eshet-Alkalai, Y. (2014). “The Influence of Text Annotation Tools on Print and Digital Reading Comprehension,” in Learning in the Technological Era: Proceedings of the 9th Chais Conference for Innovation in Learning Technologies. Editors Y. Eshet-Alkalai, A. Caspi, N. Geri, Y. Kalman, V. Silber-Varod, and Y. Yair (Ra’anana: Open University of Israel Press), 28–35.
Benedetto, S., Drai-Zerbib, V., Pedrotti, M., Tissier, G., and Baccino, T. (2013). E-readers and Visual Fatigue. PloS ONE 8, e83676. doi:10.1371/journal.pone.0083676
Bondielli, A., and Marcelloni, F. (2019). A Survey on Fake News and Rumour Detection Techniques. Inf. Sci. 497, 38–55. doi:10.1016/j.ins.2019.05.035
Bronstein, M. V., Pennycook, G., Bear, A., Rand, D. G., and Cannon, T. D. (2019). Belief in Fake News Is Associated with Delusionality, Dogmatism, Religious Fundamentalism, and Reduced Analytic Thinking. J. Appl. Res. Mem. Cogn. 8 (1), 108–117. doi:10.1016/j.jarmac.2018.09.005
Burke, K. (1969a). A Grammar of Motives. Berkeley: University of California Press. doi:10.1525/9780520341715
Burke, M., and Bon, E. V. (2018). “The Locations and Means of Literary Reading,” in Expressive Minds and Artistic Creations: Studies in Cognitive Poetics. Editor S. Csábi (Oxford: Oxford University Press), 206–231.
Carrier, L. M., Cheever, N. A., Rosen, L. D., Benitez, S., and Chang, J. (2009). Multitasking across Generations: Multitasking Choices and Difficulty Ratings in Three Generations of Americans. Comput. Hum. Behav. 25 (2), 483–489. doi:10.1016/j.chb.2008.10.012
Chilton, P. (2004). Analysing Political Discourse: Theory and Practice. London: Routledge. doi:10.4324/9780203561218
Corgnet, B., Espín, A. M., and Hernán-González, R. (2016). Creativity and Cognitive Skills Among Millennials: Thinking Too Much and Creating Too Little. Front. Psychol. 7, 1–9. doi:10.3389/fpsyg.2016.01626
Costandi, M. (2016). Neuroplasticity. Cambridge, MA: The MIT Press. doi:10.7551/mitpress/10499.001.0001
Daniel, D. B., and Woody, W. D. (2013). E-textbooks at what Cost? Performance and Use of Electronic V. Print Texts. Comput. Edu. 62, 18–23. doi:10.1016/j.compedu.2012.10.016
Davis, F. B. (1944). Fundamental Factors of Comprehension in reading. Psychometrika 9 (3), 185–197. doi:10.1007/bf02288722
De Keersmaecker, J., and Roets, A. (2017). 'Fake News': Incorrect, but Hard to Correct. The Role of Cognitive Ability on the Impact of False Information on Social Impressions. Intelligence 65, 107–110. doi:10.1016/j.intell.2017.10.005
Deary, I. J., Corley, J., Gow, A. J., Harris, S. E., Houlihan, L. M., Marioni, R. E., et al. (2009). Age-associated Cognitive Decline. Br. Med. Bull. 92, 135–152. doi:10.1093/bmb/ldp033
Dizikes, P. (2018). Study: On Twitter, false news travels faster than true stories. Available at: https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308 (Accessed June 29, 2021).
Duggan, G. B., and Payne, S. J. (2011). “Skim reading by Satisficing,” in CHI '11 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, (New York: Association for Computing Machinery), 1141–1150. doi:10.1145/1978942.1979114
Edelman Trust Barometer (2021). Edelman Trust Barometer. Available at: https://www.edelman.com/sites/g/files/aatuss191/files/2021-03/2021%20Edelman%20Trust%20Barometer.pdf (Accessed May 20, 2021).
Entman, R. M. (1991). Framing U.S. Coverage of International News: Contrasts in Narratives of the KAL and Iran Air Incidents. J. Commun. 41, 6–27. doi:10.1111/j.1460-2466.1991.tb02328.x
Entman, R. M. (1993). Framing: Toward Clarification of a Fractured Paradigm. J. Commun. 43, 51–58. doi:10.1111/j.1460-2466.1993.tb01304.x
Entman, R. M. (1989). How the media Affect what People Think: An Information Processing Approach. J. Polit. 51 (2), 347–370. doi:10.2307/2131346
Forgas, J. P., and Baumeister, R. F. (2019). The Social Psychology of Gullibility: Fake News, Conspiracy Theories, and Irrational Beliefs. New York; London: Routledge.
Gelfert, A. (2018). Fake News: A Definition. Informal Logic 38 (1), 84–117. doi:10.22329/il.v38i1.5068
Ghanem, B., Ponzetto, S., Rosso, P., and Rangel, F. (2021). “FakeFlow: Fake News Detection by Modeling the Flow of Affective Information,” in Proc. of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Editors P. Merlo, J. Tiedemann, and F. Tsarfaty (Cambridge, Massachusetts: Association for Computational Linguistics), 1–18.
Ghanem, B., Rosso, P., and Rangel, F. (2020). An Emotional Analysis of False Information in Social Media and News Articles. ACM Trans. Internet Technol. 20, 1–18. doi:10.1145/3381750
Giachanou, A., Ghanem, B., and Rosso, P. (2021). Detection of Conspiracy Propagators Using Psycho-Linguistic Characteristics. J. Inf. Sci., 016555152098548. doi:10.1177/0165551520985486
Giachanou, A., Ríssola, E. A., Ghanem, B., Crestani, F., and Rosso, P. (2020). The Role of Personality and Linguistic Patterns in Discriminating between Fake News Spreaders and Fact Checkers. Nat. Public Health Emerg. Collection, 181–192. doi:10.1007/978-3-030-51310-8_17
Giachanou, A., Rosso, P., and Crestani, F. (2019). “Leveraging Emotional Signals for Credibility Detection,” in Proc. of the 42nd Int. ACM SIGIR Conf. on Research and Development in Information Retrieval (SIGIR ’19), Paris, France, 21–25 July. doi:10.1145/3331184.3331285
Green, T. D., Perera, R. A., Dance, L. A., and Meyers, E. A. (2010). Impact of Presentation Mode on Recall of Written Text and Numerical Information: Hard Copy versus Electronic. North Am. J. Psychol. 12 (2), 233–242.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., and Lazer, D. (2019). Fake News on Twitter during the 2016 U.S. Presidential Election. Science 363, 374–378. doi:10.1126/science.aau2706
Gu, X., Wu, B., and Xu, X. (2015). Design, Development, and Learning in E-Textbooks: What We Learned and where We Are Going. J. Comput. Educ. 2, 25–41. doi:10.1007/s40692-014-0023-9
Guess, A., Nyhan, B., and Reifler, J. (2019a). Selective Exposure to Misinformation: Evidence from the Consumption of Fake News during the 2016 U.S. Presidential Campaign. Working paper. Available at: https://about.fb.com/wp-content/uploads/2018/01/fake-news-2016.pdf (Accessed May 20, 2021).
Guess, A., Nagler, J., and Tucker, J. (2019b). Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook. Sci. Adv. 5, eaau4586. doi:10.1126/sciadv.aau4586
Holzinger, A., Baernthaler, M., Pammer, W., Katz, H., Bjelic-Radisic, V., and Ziefle, M. (2011). Investigating Paper vs. Screen in Real-Life Hospital Workflows: Performance Contradicts Perceived Superiority of Paper in the User Experience. Int. J. Human-Computer Stud. 69 (9), 563–570. doi:10.1016/j.ijhcs.2011.05.002
Hunsaker, A., and Hargittai, E. (2018). A Review of Internet Use Among Older Adults. New Media Soc. 20 (10), 3937–3954. doi:10.1177/1461444818787348
Jia, X., Li, W., and Cao, L. (2019). The Role of Metacognitive Components in Creative Thinking. Front. Psychol. 10, 1–11. doi:10.3389/fpsyg.2019.02404
Kaufman, G. F., and Flanagan, M. (2013). Lost in Translation. Int. J. Gaming Computer-Mediated Simulations 5, 1–9. doi:10.4018/jgcms.2013070101
Kerr, M. A., and Symons, S. E. (2006). Computerized Presentation of Text: Effects on Children's Reading of Informational Material. Read. Writ. 19, 1–19. doi:10.1007/s11145-003-8128-y
Kretzschmar, F., Pleimling, D., Hosemann, J., Füssel, S., Bornkessel-Schlesewsky, I., and Schlesewsky, M. (2013). Subjective Impressions Do Not Mirror Online reading Effort: Concurrent EEG-Eyetracking Evidence from the reading of Books and Digital media. PLOS ONE 8, e56178. doi:10.1371/journal.pone.0056178
Kumar, S., and Shah, N. (2018). False Information on Web and Social media: A Survey. [arXiv preprint] arXiv:1804.08559.
Kuzmičová, A., Schilhab, T., and Burke, M. (2018). M-Reading: Fiction reading from mobile Phones. Convergence – Int. J. Res. into New Media Tech. 26 (2), 333–349. doi:10.1177/1354856518770987
Lauterman, T., and Ackerman, R. (2014). Overcoming Screen Inferiority in Learning and Calibration. Comput. Hum. Behav. 35, 455–463. doi:10.1016/j.chb.2014.02.046
Leeson, H. V. (2006). The Mode Effect: A Literature Review of Human and Technological Issues in Computerized Testing. Int. J. Test. 6, 1–24. doi:10.1207/s15327574ijt0601_1
Lin, C.-L., Wang, M.-J. J., and Kang, Y.-Y. (2015). The Evaluation of Visuospatial Performance between Screen and Paper. Displays 39, 26–32. doi:10.1016/j.displa.2015.08.002
Liu, Z. (2005). Reading Behavior in the Digital Environment. J. Documentation 61 (6), 700–712. doi:10.1108/00220410510632040
Lohr, S. (2018). It’s True: False News Spreads Faster and Wider. And Humans Are to Blame. The New York Times. Available at: https://www.nytimes.com/2018/03/08/technology/twitter-fake-news-research.html (Accessed June 29, 2021).
Mangen, A., and Kuiken, D. (2014). Lost in an iPad. Ssol 4 (2), 150–177. doi:10.1075/ssol.4.2.02man
Mangen, A., Walgermo, B. R., and Brønnick, K. (2013). Reading Linear Texts on Paper versus Computer Screen: Effects on reading Comprehension. Int. J. Educ. Res. 58, 61–68. doi:10.1016/j.ijer.2012.12.002
Margolin, S. J., Driscoll, C., Toland, M. J., and Kegler, J. L. (2013). E-readers, Computer Screens, or Paper: Does Reading Comprehension Change across Media Platforms? Appl. Cognit. Psychol. 27, 512–519. doi:10.1002/acp.2930
Monot, P. H., and Zappe, F. (2020). Contagion and Conviction: Rumor and Gossip in American Culture. Eur. j. Am. Stud. 15, 4. doi:10.4000/ejas.16356
Moustafa, K. (2016). Improving PDF Readability of Scientific Papers on Computer Screens. Behav. Inf. Tech. 35 (4), 319–323. doi:10.1080/0144929X.2015.1128978
Mueller, P. A., and Oppenheimer, D. M. (2014). The Pen Is Mightier Than the Keyboard. Psychol. Sci. 25, 1159–1168. doi:10.1177/0956797614524581
Mulderrig, J. (2012). The Hegemony of Inclusion: A Corpus-Based Critical Discourse Analysis of Deixis in Education Policy. Discourse Soc. 23 (6), 701–728. doi:10.1177/0957926512455377
Musgrove, A. T., Powers, J. R., Rebar, L. C., and Musgrove, G. J. (2018). Real or Fake? Resources for Teaching College Students How to Identify Fake News. Coll. Undergraduate Libraries 25 (3), 243–260. doi:10.1080/10691316.2018.1480444
Nash, S. (2019). Teaching Older Americans to Identify Fake News Online. Available at: https://longevity.stanford.edu/teaching-older-americans-to-identify-fake-news-online/ (Accessed May 21, 2021).
NatCen Social Research (2017). The Vote to Leave the EU. Available at: https://www.bsa.natcen.ac.uk/media/39149/bsa34_brexit_final.pdf (Accessed May 21, 2021).
O'Brien, N., Latessa, S., Evangelopoulos, G., and Boix, X. (2018). The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors. Available at: https://dspace.mit.edu/handle/1721.1/120056 (Accessed January 22, 2021).
O'Connor, L., Bowles-Terry, M., Davis, E., and Holliday, W. (2010). “Writing Information Literacy” Revisited. Reference User Serv. Q. 49 (3), 225–230. doi:10.5860/rusq.49n3.225
Oshikawa, R., Qian, J., and Wang, W. Y. (2020). A Survey on Natural Language Processing for Fake News Detection. Available at: https://arxiv.org/abs/1811.00770 (Accessed January 22, 2021).
Palmer, E. C., David, A. S., and Fleming, S. M. (2014). Effects of Age on Metacognitive Efficiency. Conscious. Cogn. 28, 151–160. doi:10.1016/j.concog.2014.06.007
Pennycook, A. (1994). The Politics of Pronouns. ELT J. 48 (2), 173–178. doi:10.1093/elt/48.2.173
Pennycook, G., Cannon, T. D., and Rand, D. G. (2018). Prior Exposure Increases Perceived Accuracy of Fake News. SSRN J. 147 (12), 1–61. doi:10.2139/ssrn.2958246
Pennycook, G., and Rand, D. G. (2019a). Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning. Cognition 188, 39–50. doi:10.1016/j.cognition.2018.06.011
Pennycook, G., and Rand, D. G. (2019b). Who Falls for Fake News? the Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytic Thinking. J. Pers 88 (2), 185–200. doi:10.1111/jopy.12476
Petty, R. E., and Cacioppo, J. T. (1986). Communication and Persuasion: The central and Peripheral Routes to Attitude Change. New York: Springer-Verlag. doi:10.1007/978-1-4612-4964-1
Pew Research Center (2018). An Examination of the 2016 Electorate, Based on Validated Voters. Washington, D.C.: Pew Research Center - U.S. Politics & Policy. Available at: https://www.pewresearch.org/politics/2018/08/09/for-most-trump-voters-very-warm-feelings-for-him-endured/ (Accessed May 21, 2021).
Porion, A., Aparicio, X., Megalakaki, O., Robert, A., and Baccino, T. (2016). The Impact of Paper-Based versus Computerized Presentation on Text Comprehension and Memorization. Comput. Hum. Behav. 54, 569–576. doi:10.1016/j.chb.2015.08.002
Rapp, D. N., and Salovich, N. A. (2018). Can't We Just Disregard Fake News? the Consequences of Exposure to Inaccurate Information. Pol. Insights Behav. Brain Sci. 5 (2), 232–239. doi:10.1177/2372732218785193
Rashkin, H., Choi, E., Jang, J. Y., Volkova, S., and Choi, Y. (2017). “Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking,” in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Editors M. Palmer, R. Hwa, and S. Riedel (Cambridge, Massachusetts: Association for Computational Linguistics), 2931–2937. doi:10.18653/v1/D17-1317
Rockinson-Szapkiw, A. J., Courduff, J., Carter, K., and Bennett, D. (2013). Electronic versus Traditional Print Textbooks: A Comparison Study on the Influence of University Students' Learning. Comput. Edu. 63, 259–266. doi:10.1016/j.compedu.2012.11.022
Schugar, J. T., Schugar, H., and Penny, C. (2011). A Nook or a Book? Comparing College Students’ reading Comprehension Levels, Critical reading, and Study Skills. Int. J. Tech. Teach. Learn. 7 (2), 174–192.
Schuster, T., Schuster, R., Shah, D. J., and Barzilay, R. (2020). The Limitations of Stylometry for Detecting Machine-Generated Fake News. Comput. Linguistics 46 (2), 499–510. doi:10.1162/coli_a_00380
Sparrow, B., Liu, J., and Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science 333, 776–778. doi:10.1126/science.1207745
Spohr, D. (2017). Fake News and Ideological Polarization. Business Inf. Rev. 34 (3), 150–160. doi:10.1177/0266382117722446
Sykora, M., Elayan, S., and Jackson, T. W. (2020). A Qualitative Analysis of Sarcasm, Irony and Related #hashtags on Twitter. Big Data Soc. 7 (1), 1–15. doi:10.1177/2053951720972735
Tandoc, E. C., Lim, Z. W., and Ling, R. (2017). Defining "Fake News". Digital Journalism 6, 137–153. doi:10.1080/21670811.2017.1360143
Torabi Asr, F., and Taboada, M. (2019). Big Data and Quality Data for Fake News and Misinformation Detection. Big Data Soc. 6 (1), 1–14. doi:10.1177/2053951719843310
Torres, R., Gerhart, N., and Negahban, A. (2018). Combating Fake News: An Investigation of Information Verification Behaviors on Social Networking Sites. Proc. 51st Hawaii Int. Conf. Syst. Sci. doi:10.24251/hicss.2018.499
Traylor, T., Straub, J., Gurmeet, J., and Snell, N. (2019). “Classifying Fake News Articles Using Natural Language Processing to Identify In-Article Attribution as a Supervised Learning Estimator”, in IEEE 13th International Conference on Semantic Computing. doi:10.1109/ICOSC.2019.8665593
Van Dijk, T. A. (2002). “Political Discourse and Political Cognition,” in Politics as Text and Talk: Analytical Approaches to Political Discourse. Editors P. Chilton and C. Schaffner (Amsterdam: John Benjamins), 203–237. doi:10.1075/dapsac.4.11dij
Van Dijk, T. A. (1987). “Mediating Racism: The Role of the media in the Reproduction of Racism,” in Language, Power and Ideology. Editor R. Wodak (Amsterdam: John Benjamins), 199–226.
Verdi, M. P., Crooks, S. M., and White, D. R. (2002). Learning Effects of Print and Digital Geographic Maps. J. Res. Tech. Edu. 35, 290–302. doi:10.1080/15391523.2002.10782387
Vosoughi, S., Roy, D., and Aral, S. (2018). The Spread of True and False News Online. Science 359, 1146–1151. doi:10.1126/science.aap9559
Wineburg, S., and McGrew, S. (2017). Lateral Reading: Reading Less and Learning More when Evaluating Digital Information. Stanford History Education Group Working Paper No. 2017-A1. doi:10.2139/ssrn.3048994
Wodak, R. (1987). “The Power of Political Jargon: A ‘Club-2’ Discussion,” in Language, Power and Ideology. Editor R. Wodak (Amsterdam: John Benjamins), 137–164.
Zhang, D., and Zadorozhny, V. I. (2020). “Fake News Detection Based on Subjective Opinions,” in Advances in Databases and Information Systems. Editors J. Darmont, B. Novikov, and R. Wrembel (Switzerland: Springer International Publishing), 108–121. doi:10.1007/978-3-030-54832-2_10
Keywords: fake news, review, digital literacy, age, cognition, linguistics, psychology, metacognition
Citation: Gaillard S, Oláh ZA, Venmans S and Burke M (2021) Countering the Cognitive, Linguistic, and Psychological Underpinnings Behind Susceptibility to Fake News: A Review of Current Literature With Special Focus on the Role of Age and Digital Literacy. Front. Commun. 6:661801. doi: 10.3389/fcomm.2021.661801
Received: 31 January 2021; Accepted: 24 June 2021;
Published: 07 July 2021.
Edited by: Antonio Benítez-Burraco, Sevilla University, Spain
Reviewed by: Paolo Rosso, Universitat Politècnica de València, Spain
Marc Rader, Independent Researcher, MD, United States
Copyright © 2021 Gaillard, Oláh, Venmans and Burke. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Michael Burke, firstname.lastname@example.org
†These authors have contributed equally to this work and share first authorship
‡This author shares senior authorship