ORIGINAL RESEARCH article

Front. Hum. Neurosci., 24 February 2026

Sec. Cognitive Neuroscience

Volume 20 - 2026 | https://doi.org/10.3389/fnhum.2026.1750499

Authority reliance vs. deliberative assessment in processing online rumors: evidence from fNIRS

  • 1. School of Business Management, Zhejiang Financial College, Hangzhou, China

  • 2. School of Business, Anhui Xinhua University, Hefei, China

  • 3. School of Economics and Management, Anhui Polytechnic University, Wuhu, China


Abstract

Purpose:

This study aimed to clarify how the authority of a fact-checker shapes neurocognitive processing of online rumors. Specifically, this study examined differences in neural responses to corrections provided by authoritative and non-authoritative sources.

Approach:

Functional near-infrared spectroscopy (fNIRS) was used to measure neural activity in the prefrontal cortex while participants evaluated information that had been fact-checked by either authoritative or non-authoritative third-party sources. Behavioral metrics, such as judgment accuracy, were collected alongside neural data to correlate brain activity with decision-making outcomes.

Results/findings:

Authoritative fact-checkers produced stronger activation in the left prefrontal cortex (LPFC) and improved overall judgment accuracy, suggesting a cognitive “fast track” that facilitates information acceptance. This enhanced accuracy was accompanied by increased LPFC engagement, indicating deeper analytical processing. For true information, non-authoritative fact-checking led to reduced right prefrontal cortex (RPFC) activation and only marginal behavioral improvements, suggesting participants relied on heuristic shortcuts or “cognitive offloading” rather than rigorous deliberation. During false information processing, RPFC activation decreased across specific channels (e.g., Ch19), with non-authoritative sources yielding higher false-information judgment accuracy (59%) compared to authoritative sources (55%). This paradoxical effect suggests that lower source credibility can, in certain contexts, elicit more vigilant evaluation of false claims. The neural and behavioral responses to authoritative versus non-authoritative sources varied based on information veracity, consistent with cognitive dissonance theory, which posits adaptive shifts in processing strategies in response to credibility cues.

Value:

By linking source credibility to distinct neural signatures and accuracy outcomes, this work provides a neurocognitive account of how fact-checker authority influences belief updating. The findings highlight that credibility cues can promote heuristic acceptance or more careful analysis, depending on the situation. Furthermore, this evidence can inform more effective rumor-intervention strategies that are sensitive to both source attributes and information type.

Introduction

The rapid proliferation of online rumors has emerged as a pressing concern in academia and practice (Pal et al., 2019; Pennycook et al., 2020). Rumors are defined as information and news lacking confirmation or certainty regarding factual accuracy (Bondielli and Marcelloni, 2019). Sharing messages of unknown veracity may cause panic and anxiety among the public, especially when the messages eventually turn out to be false (Li and Sakamoto, 2014). The proliferation of online information places a constant burden on users to determine its credibility. Notably, individuals tend to rely on the perceived authority of a source, showing higher trust in information that originates from an expert or official institution (Moran and Nechushtai, 2022; Tandoc et al., 2018; Yang et al., 2026; Kaňková and Matthes, 2026; Bolsen and Druckman, 2015). While efficient, this heuristic can lead to passive acceptance without deeper thought (Kaňková and Matthes, 2026). Understanding the cognitive and neural basis of this process, especially when people evaluate corrective information from sources of varying credibility, is crucial for addressing the spread of rumors.

Prior research found that the use of denials is an effective strategy to debunk rumors (Pal et al., 2019; Bordia et al., 2005). Adding warning labels has proven useful in enhancing individuals’ critical thinking ability, thereby curtailing the spread of rumors (Pal et al., 2019; Pennycook et al., 2020). Bolsen and Druckman (2015) revealed that warnings are more effective than corrections in countering directional reasoning regarding scientific claims. Furthermore, Pennycook et al. reported that warning labels can significantly improve accuracy in identifying political information (Pennycook et al., 2020). Notably, the impact of the authority of third-party fact-checkers on information dissemination and public perception remains poorly understood.

Across platforms such as social media, news reports, online forums, and expert blogs, fact-checking efforts exert inconsistent effects on users’ beliefs (Bhattacherjee, 2023). Factors such as the authority and professionalism of third-party fact-checkers shape users’ cognitive frameworks and information processing strategies to varying degrees (Xu et al., 2023). Prike et al. (2024) found that authoritative labels serve as effective interventions, reducing the spread of false information and enhancing users’ ability to discern true from false information. For instance, official media outlets, owing to their authority and credibility, tend to have their messages readily accepted and internalized as personal beliefs, even when those messages contain imperfections. This process is intricately linked to psychological mechanisms such as trust transfer and cognitive biases (Feng et al., 2023; Moore et al., 2021). In contrast, social media information is more fragmented and diverse, requiring users to sift through it and assess its authenticity and reliability on their own, which increases cognitive load and risks information overload and misjudgment (Qiang and Hu, 2025). Thus, investigating how the authority of third-party fact-checkers influences individual cognitive processing is significant for understanding the decision-making behaviors of information recipients.

The authority of an information source depends on the public’s trust in the media organization and the perceived reliability of the information it provides (Moran and Nechushtai, 2022). Research on trust building indicates that in digital environments, the identity cues of an authoritative platform serve as a crucial signal for establishing credibility (Tandoc et al., 2018). To examine how source credibility shapes belief updating, this study focused on the dimension of institutional authority. This concept captures the influence a publication derives from its official status, its recognized role within a media institution, and the broad public trust it consequently holds (Tandoc et al., 2018). To represent different levels of such authority, two contrasting news sources were selected: People’s Daily and Jiupai News. This choice is grounded in empirical media credibility research in China, which documents a pronounced “stronger central, weaker local” credibility gradient, where central-level outlets consistently hold a significant trust advantage over local ones (Yang et al., 2026; Qiang and Hu, 2025). People’s Daily exemplifies a central-level, high-authority source. In contrast, Jiupai News is a market-oriented online platform operated by a local press group, aligning with the “weaker local” credibility category. The core contrast between these sources lies in their fundamental institutional position and perceived authority, not in specific content. This design therefore isolates the effect of institutional authority on how people process fact-checking information.

Traditional research on rumor denial has relied heavily on self-reported measures and behavioral outcomes, such as whether people believe or share a corrected claim (Pal et al., 2019; Li and Sakamoto, 2014; Moran and Nechushtai, 2022). These studies observe what decisions are made but offer limited insight into the underlying cognitive processes. When analysis is limited to final choices, real-time cognitive dynamics, such as conflict monitoring or trust calibration, remain largely unexplored. Elucidating these processes requires methods that can capture the brain’s activity in real time while judgments are formed. Psychophysiological tools, such as brain imaging, have received growing attention for their ability to complement other sources of data, including self-reports (i.e., questionnaires investigating rumor behavior and rumor belief) (Pennycook et al., 2020; Ding et al., 2024). Prior neuroscience work on credibility has largely addressed different questions, using electroencephalography (EEG) or functional magnetic resonance imaging (fMRI) to study memory-based truth judgments or the continued influence of corrected misinformation (Düzel et al., 1997; Gordon et al., 2019). Direct analysis of the neural correlates of rumor cognition remains scarce (Ding et al., 2024; Gordon et al., 2019; Kawiak et al., 2020; McClellan et al., 2026).

In contrast to other methods, functional near-infrared spectroscopy (fNIRS) offers unique advantages for non-invasively monitoring cognitive processing in the brain (Quaresima and Ferrari, 2019). fNIRS captures dynamic changes in oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR) concentrations within the cerebral cortex, providing real-time feedback through these biomarkers (Herold et al., 2018). Owing to its non-invasiveness, high temporal resolution, and good spatial localization, fNIRS has emerged as an essential tool in cognitive science (Waight et al., 2024; Chandra and Choudhury, 2023). It is well-suited for the present research question, as it enables the measurement of neural engagement in realistic tasks, focusing on the prefrontal cortex (PFC), a brain area vital for critical thinking, evaluating credibility, and resolving conflicting information (McClellan et al., 2026; Quaresima and Ferrari, 2019). Like fMRI, fNIRS detects and quantifies subtle variations in regional hemoglobin concentrations, allowing the analysis of specific areas activated by cognitive tasks. These changes directly reflect neural activity intensity and offer robust evidence for understanding dynamic interactions within brain functional networks, as well as the neural underpinnings of cognitive processes (Zinos et al., 2024). Moravec et al. (2019) utilized EEG and found that a fake news label on a headline influenced neither veracity judgments nor users’ beliefs. Another study by McClellan et al. (2026) reported increased activation in the PFC when participants were exposed to misinformation that conflicted with their attitudes. Despite the broad adoption of fNIRS (Quaresima and Ferrari, 2019), its application in exploring how source authority shapes the brain’s response during fact-checking is novel.
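
The HbO/HbR quantification that fNIRS relies on follows the modified Beer-Lambert law: optical density changes at two wavelengths form a linear system that is inverted for concentration changes. A minimal sketch follows; the extinction coefficients and differential pathlength factors are illustrative placeholders, not calibrated values:

```python
import numpy as np

# Modified Beer-Lambert law: dOD(lambda) = (eps_HbO * dHbO + eps_HbR * dHbR) * d * DPF.
# With two wavelengths this is a 2x2 linear system, inverted for (dHbO, dHbR).

# Illustrative extinction coefficients [1/(mM*cm)]; real analyses use tabulated
# values for the instrument's actual wavelengths (e.g., ~760 and ~850 nm).
E = np.array([[0.586, 1.548],   # wavelength 1: [eps_HbO, eps_HbR]
              [1.058, 0.691]])  # wavelength 2: [eps_HbO, eps_HbR]
d = 3.0                      # source-detector separation in cm (30 mm, as here)
dpf = np.array([6.0, 5.0])   # assumed differential pathlength factors

def od_to_concentration(d_od):
    """Invert the two-wavelength MBLL system; returns (dHbO, dHbR) in mM."""
    return np.linalg.solve(E, d_od / (d * dpf))

# Self-consistency check: forward-model a known change, then recover it
true_conc = np.array([0.02, -0.005])          # dHbO = +20 uM, dHbR = -5 uM
d_od = (E @ true_conc) * d * dpf
recovered = od_to_concentration(d_od)
print(recovered)  # ~[0.02, -0.005]
```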

Following Moravec et al. (2019), our study is grounded in cognitive dissonance theory and examines how dissonance shapes behavior toward online rumors. As Moravec et al. note, this situation can create cognitive dissonance when a user is inclined to believe a fake headline despite it being labeled as false. According to cognitive dissonance theory, individuals experience psychological discomfort when confronted with information that contradicts their existing beliefs or initial judgments (Festinger, 1962). To resolve this discomfort, either an intuitive, fast response or analytical processing may be triggered, reducing dissonance through belief adjustment or source discounting (Moravec et al., 2019; Kahneman, 2012). When the issue is perceived as low in personal relevance, individuals often minimize or dismiss the dissonance and default to their prior beliefs (Nickerson, 1998). However, when motivated, they invest additional cognitive resources to reconcile the contradiction, a process that demands more time and mental effort (Festinger, 1962). In summary, this study posits that true or false labels assigned to rumors by authoritative vs. non-authoritative parties, when aligned or misaligned with a person’s beliefs, will trigger cognitive dissonance.

To elucidate these neurocognitive mechanisms, fact-checking sources were operationalized along the dimension of authority (authoritative vs. non-authoritative), and their interactions with information veracity (true vs. false) were examined. Within the framework of cognitive dissonance, this study hypothesizes that corrections from a high-authority source may trigger a stronger initial neural conflict signal, but also a quicker pathway to belief updating to resolve discomfort. Conversely, corrections from a low-authority source might induce a different pattern of neural engagement, potentially eliciting more sustained evaluative processing. Using fNIRS, participants’ judgment accuracy and changes in PFC activation were monitored following different fact-checking prompts. By exploring the relationship between fact-checker authority and real-time cognitive processing in the brain, this study aims to reveal the neural underpinnings of how authority shapes dissonance resolution and decision-making. The findings provide neuroscientific evidence to inform strategies for effective communication and trust calibration in digital information environments.

Materials and methods

The study was approved by the Ethics Committee of the Institute of Neuroscience and Cognitive Psychology of Anhui Polytechnic University (AHPU-SEM-2021-002) and conducted under the ethical standards of the Declaration of Helsinki. All participants voluntarily signed the informed consent form after reading and fully comprehending its contents.

Participants

An a priori power analysis was conducted using G*Power (Faul et al., 2007) to determine the requisite sample size. Based on the study design, a repeated-measures ANOVA with two within-subject factors (source and label, each with two levels) was used to calculate the sample size. A target sample of 36 participants was determined based on an effect size of f = 0.25, an alpha level (α) of 0.05, and a statistical power (1 − β) of 0.95. To ensure data integrity and experimental reliability, 42 participants (19 females, 23 males; mean age = 22.4 years, SD = 2.59 years) were recruited. The participants were undergraduate or graduate students with relevant academic backgrounds and experimental competence. Their health status was screened to confirm that none suffered from neurological or psychiatric disorders and that all had normal or corrected-to-normal vision. Before starting the experiment, the experimental procedures and the fundamental principles of fNIRS technology were thoroughly explained to all participants.
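
G*Power’s repeated-measures ANOVA procedure involves additional parameters (number of measurements, correlation among repeated measures), so its exact result is not reproduced here. As a rough analogue, the sketch below treats a single two-level within-subject contrast as a paired t-test (Cohen’s f = 0.25 corresponds to d = 0.5) and finds the sample size reaching 95% power from the noncentral t distribution:

```python
from scipy import stats

def paired_t_power(n, d, alpha=0.05):
    """Two-sided power of a paired t-test with n subjects and effect size d."""
    df = n - 1
    ncp = d * n ** 0.5                       # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Power = P(|T| > t_crit) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, ncp)
            + stats.nct.cdf(-t_crit, df, ncp))

d = 0.5  # Cohen's f = 0.25 for a two-level factor corresponds to d = 2f = 0.5
n = 3
while paired_t_power(n, d) < 0.95:
    n += 1
print(n, round(paired_t_power(n, d), 3))  # smallest N reaching 95% power
```

This stand-in illustrates the logic (fix effect size, alpha, and target power, then search for N), not G*Power’s specific within-factors computation.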

Stimulus material and experimental design

Initially, 24 pieces of information were selected from current social media, covering two common themes: health and technology. A total of 111 undergraduate students (58 male, 53 female; mean age = 22.0 years, SD = 2.3) were recruited to participate in a pre-experiment. All participants evaluated the authenticity (true/false) of these pieces of information. The results showed that the overall judgment accuracy (45.31% for true information, 54.17% for false information) approximated chance performance, indicating that the information possessed substantial ambiguity in terms of truthfulness and was suitable for investigating the influence of external sources. Based on these results, four representative pieces of true information and four pieces of false information were selected, ultimately forming a set of 8 core experimental materials. To control for potential confounding variables, quantitative balance checks on the final 8 selected materials were conducted. An independent-samples t-test revealed no significant difference in the average word count between the true and false information (M_true = 18.75 words, SD = 1.26; M_false = 17.75 words, SD = 0.96; t(6) = 1.265, p = 0.253).
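
The balance check can be reproduced in outline. The per-item word counts below are hypothetical values chosen to match the reported means and standard deviations, since the actual counts were not published:

```python
from scipy import stats

# Hypothetical per-item word counts chosen to match the reported descriptives
# (M_true = 18.75, SD = 1.26; M_false = 17.75, SD = 0.96); the actual counts
# were not published.
true_words = [17, 19, 19, 20]
false_words = [17, 17, 18, 19]

# Independent-samples t-test with equal variances assumed (scipy's default)
t_stat, p_val = stats.ttest_ind(true_words, false_words)
print(round(t_stat, 3), round(p_val, 3))  # t matches the reported t(6) = 1.265
```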

This study employed E-Prime 3.0 software to develop the experimental protocol. The materials used and the experimental protocol can be downloaded from https://pan.baidu.com/s/1xl-RywO_TFppjtelTM5LWg?pwd=acdi. Previous research reported that cognitive processing differs fundamentally between statements containing true versus false conceptual features (Marques et al., 2009). Therefore, to ensure the objectivity and accuracy of the results, two pieces of true information were randomly assigned the label of an authoritative third-party fact-checker (People’s Daily), whereas the other two pieces of true information were assigned the label of a non-authoritative third-party fact-checker (Jiupai News). The same manipulation was applied to the false information to maintain balance across the experimental conditions. The authority of a news source relies primarily on the public’s high degree of trust in the media organization (Moran and Nechushtai, 2022). Hence, the identity marker of an authoritative platform serves as a key trust signal (Tandoc et al., 2018). Recent studies have shown a significant difference in authority between local and central media (Yang et al., 2026; Qiang and Hu, 2025). These findings support the operationalization of source authority in our experiment.

Before the formal experiment, all participants completed a practice session consisting of 4 distinct stimulus materials presented sequentially, each shown once. The practice covered both information types and both information sources, using independent materials not included in the formal experiment. During the practice, each stimulus was presented for 5 s and then disappeared automatically; within these 5 s, participants were required to judge the authenticity of the labeled information. After each response, the experimenter provided oral feedback to ensure participants understood the task requirements. The goal of this experimental paradigm was to capture the hemodynamic response associated with processing information from different sources. The experiment employed a block design, presenting a total of 16 information items. Block 1 contained 4 true information items and 4 false information items. In Block 2, among the 4 true information items, two were labeled as fact-checked as true by People’s Daily and two as fact-checked as true by Jiupai News; among the 4 false information items, two were labeled as fact-checked as false by People’s Daily and two as fact-checked as false by Jiupai News. Throughout the experiment, Block 1 and Block 2 were presented in sequence, with the items within each block randomized.

As shown in Figure 1, each stimulus followed a structured timeline. The information was presented for 5 s, during which participants read the item and judged its authenticity. Participants responded by pressing 0 on the keyboard for information deemed true and 1 for information deemed false. Considering that the standard hemodynamic response requires approximately 10 s after reaching its peak to return to baseline levels (Kaiser et al., 2014; Leff et al., 2006; Malonek and Grinvald, 1996), a 10-s blank rest interval followed each judgment. Before the next trial, a fixation cross was displayed on the screen for 1 s. To obtain a robust hemodynamic response for signal averaging, each unique combination was presented five times in random order within its respective condition block. Throughout the experiment, fNIRS data were continuously recorded for approximately 30 min per participant, ensuring data completeness and accuracy and laying a solid foundation for subsequent data analysis. The raw data can be downloaded from https://cstr.cn/31253.11.sciencedb.35305.
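
The block and trial structure described above can be sketched as a trial-list generator. This is a hypothetical reconstruction of the E-Prime protocol; the condition layout follows the text, and the assumption that Block 1 items carry no source label is inferred from the no-source conditions reported in the Results:

```python
import random

# Timing constants (seconds) from the procedure description
STIMULUS_S, REST_S, FIXATION_S = 5, 10, 1
REPETITIONS = 5  # each unique combination is presented five times

# Block 1: 8 items with no source label (assumption: unlabeled baseline block)
block1 = [{"veracity": v, "source": None} for v in ("true",) * 4 + ("false",) * 4]
# Block 2: two items per veracity x source cell
block2 = [{"veracity": v, "source": s}
          for v in ("true", "false")
          for s in ("People's Daily", "Jiupai News")
          for _ in range(2)]

def build_run(seed=0):
    """Blocks in fixed order; repeated trials randomized within each block."""
    rng = random.Random(seed)
    run = []
    for block in (block1, block2):
        trials = [trial for trial in block for _ in range(REPETITIONS)]
        rng.shuffle(trials)
        run.extend(trials)
    return run

run = build_run()
print(len(run))  # 80 trials: (8 + 8) items x 5 repetitions
```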

Figure 1

Data acquisition

The study employed the NIRSport 2 near-infrared spectroscopy (NIRS) brain functional imaging system (Nanjing Jian Chuang Technology Co., Ltd., China), which combines LED light sources with active detector technology for wearable brain functional imaging. The NIRSport 2 achieves exceptional portability while enhancing the flexibility and precision of experimental design through its built-in standardized probe positioning mechanism and flexible custom configuration capabilities. The system’s multiple digital I/O trigger options ensure precise capture and recording of triggering events, supporting the timeliness and accuracy of experimental data. Its real-time data stream display allows researchers to monitor experimental progress instantly and adjust parameters, ensuring research efficiency. Notably, the NIRSport 2 is compatible with diverse brain functional imaging approaches, such as EEG, fMRI, transcranial magnetic stimulation (TMS), and eye-tracking systems (Li et al., 2022; Esposito et al., 2020; Zhu and Lv, 2023), enabling simultaneous acquisition and analysis of multimodal data and offering insights into the complex functional networks of the brain.

This study utilized the NIRSport 2 system to collect hemodynamic signals from the PFC. The system configuration consisted of 8 light sources and 8 detectors, forming a total of 23 effective measurement channels. The distance between the light sources and detectors was strictly controlled at 30 mm to achieve an optimal balance between signal penetration depth and spatial resolution. The sampling frequency was set at 10.17 Hz. The probe array was positioned according to the international 10–20 EEG placement system. Specifically, a detector in the central column was fixed at the Fpz point, serving as the reference landmark. The entire probe array symmetrically covered the bilateral prefrontal regions, ensuring complete coverage of the target Brodmann areas (BA 8, 9, 10, 11, 45, 46). Collectively, these regions constitute the core of the prefrontal cortex, playing pivotal roles in executing high-level cognitive functions, including but not limited to thought and intuition processing, encoding and retrieval of memory information, and formulation of problem-solving strategies. They are intimately connected with the limbic system, jointly modulating individuals’ behavior (Panagiotaropoulos, 2024). The probe layout is detailed in Figure 2, and the corresponding Brodmann areas for each channel are listed in Table 1. Prior to the formal experiment, each participant wore the head cap, and calibration was performed using the system’s built-in signal quality check function. This process displayed the signal intensity of each channel in real-time. The contact pressure between the probes and the scalp was adjusted until the raw light intensity signals of all channels stabilized and the signal-to-noise ratio reached an acceptable level, after which formal data recording commenced.

Figure 2

Table 1

| Channel | Brodmann area (BA) | MNI X | MNI Y | MNI Z | Overlap | ROI assignment |
| --- | --- | --- | --- | --- | --- | --- |
| Ch1 | 10 - Frontopolar area | 2.667 | 68.667 | 13.333 | 1 | FP-ROI |
| Ch2 | 11 - Orbitofrontal area | −11.667 | 73 | −4.333 | 0.556 | OFC-ROI |
| Ch3 | 11 - Orbitofrontal area | 14.667 | 73 | −4.667 | 0.583 | OFC-ROI |
| Ch4 | 9 - Dorsolateral prefrontal cortex | 1.667 | 55.333 | 40.333 | 0.909 | DLPFC-ROI |
| Ch5 | 8 - Includes frontal eye fields | 1.667 | 30.667 | 59 | 1 | NONE |
| Ch6 | 9 - Dorsolateral prefrontal cortex | −10.667 | 46.333 | 51.667 | 0.775 | DLPFC-ROI |
| Ch7 | 9 - Dorsolateral prefrontal cortex | 13.333 | 46.667 | 52.333 | 0.776 | DLPFC-ROI |
| Ch8 | 10 - Frontopolar area | −14 | 68 | 23.667 | 1 | FP-ROI |
| Ch9 | 10 - Frontopolar area | −25.333 | 68.667 | 3.667 | 0.625 | FP-ROI |
| Ch10 | 46 - Dorsolateral prefrontal cortex | −24.667 | 56.667 | 34.333 | 0.506 | DLPFC-ROI |
| Ch11 | 46 - Dorsolateral prefrontal cortex | −42 | 55.667 | 15 | 0.832 | NONE |
| Ch12 | 9 - Dorsolateral prefrontal cortex | −29.667 | 44.667 | 42.667 | 0.876 | DLPFC-ROI |
| Ch13 | 45 - Pars triangularis Broca’s area | −47 | 44.667 | 23.667 | 0.759 | NONE |
| Ch14 | 11 - Orbitofrontal area | −34 | 64.333 | −9.333 | 0.469 | OFC-ROI |
| Ch15 | 46 - Dorsolateral prefrontal cortex | −48.667 | 50 | −1 | 0.911 | NONE |
| Ch16 | 10 - Frontopolar area | 17.333 | 68.333 | 24.667 | 1 | FP-ROI |
| Ch17 | 10 - Frontopolar area | 29.333 | 69 | 4.333 | 0.656 | FP-ROI |
| Ch18 | 9 - Dorsolateral prefrontal cortex | 27.333 | 56.667 | 34.667 | 0.461 | DLPFC-ROI |
| Ch19 | 46 - Dorsolateral prefrontal cortex | 45.333 | 54.667 | 16.333 | 0.852 | DLPFC-ROI |
| Ch20 | 9 - Dorsolateral prefrontal cortex | 33.667 | 44.667 | 42.667 | 0.857 | DLPFC-ROI |
| Ch21 | 45 - Pars triangularis Broca’s area | 50 | 42.667 | 25.667 | 0.894 | NONE |
| Ch22 | 11 - Orbitofrontal area | 37.667 | 64.667 | −9.333 | 0.403 | OFC-ROI |
| Ch23 | 46 - Dorsolateral prefrontal cortex | 51.333 | 49.667 | 0.667 | 0.849 | NONE |

MNI coordinates of fNIRS channels.

The acquired raw light intensity data were preprocessed using the Homer2 toolbox (MATLAB) with the following steps:

  • Raw Data Inspection: The Homer2 toolkit was used to perform an initial check on the acquired raw NIRS data to ensure data integrity and quality, excluding obvious anomalies or corrupted data.

  • Conversion of Light Intensity to Optical Density (OD): The raw light intensity data were converted to optical density values.

  • Motion Artifact Detection and Correction: The Spline interpolation method was employed for motion artifact detection and correction on the OD data. By identifying abnormal jump points in the signal, Spline interpolation was used to smooth the artifact-affected regions, effectively reducing the impact of motion artifacts on data quality.

  • Bandpass Filtering: Bandpass filtering was applied to the corrected OD data. The filtering range was typically set from 0.01 Hz to 0.5 Hz to remove low-frequency drifts (e.g., instrumental drift or physiological low-frequency noise) and high-frequency noise (e.g., cardiac or respiratory interference), preserving the effective signal components related to brain activity.

  • Conversion of Optical Density Back to Light Intensity: The filtered OD data were converted back to light intensity data for subsequent analysis and visualization. This step provides standardized data for subsequent signal averaging and statistical analysis.
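
The intensity-to-OD conversion and bandpass filtering steps above can be sketched as follows (a minimal stand-in for the Homer2 routines, run on a synthetic signal; the spline motion-correction step is omitted):

```python
import numpy as np
from scipy import signal

FS = 10.17  # sampling frequency (Hz), as in the acquisition settings

def intensity_to_od(intensity):
    """OD = -log10(I / I0), with I0 taken as the mean intensity."""
    return -np.log10(intensity / np.mean(intensity))

def bandpass(x, low=0.01, high=0.5, order=3):
    """Zero-phase Butterworth bandpass, 0.01-0.5 Hz as in the pipeline."""
    sos = signal.butter(order, [low, high], btype="bandpass", fs=FS, output="sos")
    return signal.sosfiltfilt(sos, x)

# Synthetic check: a slow 0.1 Hz "hemodynamic" wave plus a 2 Hz cardiac-like
# oscillation, folded into a fake light-intensity trace around I0 = 1000
t = np.arange(0, 300, 1 / FS)
slow = np.sin(2 * np.pi * 0.1 * t)
fast = 0.5 * np.sin(2 * np.pi * 2.0 * t)
raw = 1000 * 10 ** (-(slow + fast) / 10)

od = intensity_to_od(raw)     # conversion step
filtered = bandpass(od)       # filtering step; the 2 Hz component is suppressed
```

After filtering, the slow component survives nearly unchanged while the 2 Hz “cardiac” component is attenuated by the 0.5 Hz upper cutoff.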

Definition and rationale for region of interest selection

Based on the core functional localization of the prefrontal cortex (PFC) in “information credibility assessment,” “cognitive conflict monitoring,” and “decision-making control” (Panagiotaropoulos, 2024), and combined with the core research question of this study, the regions of interest (ROIs) in this study were defined through the following two steps: First, with reference to the corresponding relationship between the fNIRS probe layout and Brodmann areas (Table 1), functional subregions within the PFC that are directly related to “credibility judgment” and “conflict resolution” were identified. Second, drawing on reports of brain regions associated with “information source evaluation” in previous neuroimaging studies (Kawiak et al., 2020; McClellan et al., 2026), channels irrelevant to the cognitive processes of this study were excluded, ultimately forming 3 core ROIs. The details are as follows:

Dorsolateral prefrontal cortex ROI

The dorsolateral prefrontal cortex (DLPFC, corresponding to Brodmann areas 9/46, BA9/46) is a core brain region for cognitive control and working memory integration, responsible for logical analysis, conflict resolution, and adjustment of decision-making strategies in the judgment of information authenticity. In this study, this region is involved in the “matching assessment between the authority cues of fact-checkers and the authenticity of information,” and serves as a key region for verifying the hypothesis that “authoritative sources improve judgment accuracy by reducing cognitive conflict.” According to the corresponding relationship between channels and Brodmann areas in Table 1, the dorsolateral prefrontal cortex ROI (DLPFC-ROI) includes the following 8 channels: Ch4 (BA9), Ch6 (BA9), Ch7 (BA9), Ch10 (BA46), Ch12 (BA9), Ch18 (BA9), Ch19 (BA46), and Ch20 (BA9).

Frontopolar area ROI

The frontopolar area (corresponding to Brodmann area 10, BA10) is responsible for handling the “conflict coordination between information source credibility and content authenticity.” Especially when “external cues (e.g., source authority) are inconsistent with internal cognition (e.g., prior beliefs),” this region regulates the information integration process by enhancing activation (Panagiotaropoulos, 2024). In this study, this region is involved in the “processing of cognitive uncertainty induced by non-authoritative sources” and is a key region for explaining the phenomenon that “non-authoritative sources achieve higher judgment accuracy for false information.” According to Table 1, the frontopolar area ROI (FP-ROI) includes the following 5 channels: Ch1 (BA10), Ch8 (BA10), Ch9 (BA10), Ch16 (BA10), and Ch17 (BA10). Moravec et al. (2019) found in a study on false information processing that the activation of the frontopolar area is significantly correlated with the “vigilance assessment of low-credibility sources.”

Orbitofrontal cortex ROI

The orbitofrontal cortex (corresponding to Brodmann area 11, BA11) is involved in emotional credibility assessment and decision-making bias regulation, and is responsible for transforming the “subjective trust in source authority” into specific judgment behaviors (Feng et al., 2023). In this study, this region is involved in the “trust transfer process toward People’s Daily (a high-authority source)” and is a key region for explaining the phenomenon that “authoritative sources improve the judgment accuracy of true information.” According to Table 1, the Orbitofrontal Cortex ROI (OFC-ROI) includes the following 4 channels: Ch2 (BA11), Ch3 (BA11), Ch14 (BA11), and Ch22 (BA11). Moore et al. (2021) confirmed through fMRI studies that the activation intensity of the orbitofrontal cortex is positively correlated with the “degree of trust in official media.”

All statistical analyses of fNIRS data in this study (including Friedman test and Nemenyi post-hoc test) were limited to the 17 channels of the above three ROIs. Among the total 23 channels, 6 channels (Ch5/Ch11/Ch13/Ch15/Ch21/Ch23) were excluded as they are irrelevant to the cognitive processes of this study. None of these channels are associated with the “impact of source authority on rumor processing” (the core focus of this study). This ROI definition strategy ensures the “functional specificity” and “theoretical relevance” of statistical analyses, avoids the risk of false positives caused by undifferentiated analysis of all channels, and is highly consistent with the hypothetical framework of this study based on cognitive dissonance theory.
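
The channel-to-ROI grouping above can be stated compactly in code (a simple sketch; the per-channel values are random placeholders standing in for real HbO estimates):

```python
import numpy as np

# Channel groupings taken from the ROI definitions above (1-indexed channels)
ROIS = {
    "DLPFC-ROI": [4, 6, 7, 10, 12, 18, 19, 20],
    "FP-ROI":    [1, 8, 9, 16, 17],
    "OFC-ROI":   [2, 3, 14, 22],
}
EXCLUDED = [5, 11, 13, 15, 21, 23]  # channels outside any ROI

def roi_means(channel_values):
    """Average a 23-element per-channel vector within each ROI."""
    return {roi: float(np.mean([channel_values[ch - 1] for ch in chans]))
            for roi, chans in ROIS.items()}

rng = np.random.default_rng(0)
hbo = rng.normal(size=23)            # placeholder per-channel HbO estimates
print(roi_means(hbo))
```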

Results

Participant performance and subjective rating

Participants’ behavioral data were analyzed to evaluate their accuracy in judging information. Judgment accuracy data were analyzed using chi-square tests of independence, a non-parametric method suited to frequency counts of categorical outcomes (i.e., the number of “true” vs. “false” judgments) under different experimental conditions, such as different source labels (Franke et al., 2012). The chi-square test determines whether the distribution of these categorical responses is independent of the experimental conditions, directly addressing our primary behavioral research question: whether the source of fact-checking influences the proportion of “true” judgments. Before the analysis, the assumptions underlying the chi-square test were verified, including that all expected cell frequencies were greater than five.

Accuracy in identifying true information

In the absence of a third-party fact-checker label (T-NS), participants’ accuracy in assessing true information was only 37%. With a non-authoritative third-party fact-checker (T-JS), namely Jiupai News, accuracy improved to 45%. Notably, with an authoritative third-party fact-checker (T-RS), namely People’s Daily, accuracy in judging true information increased significantly to 62%. A chi-square test revealed a significant main effect of source condition on judgment accuracy for true information, χ2(2, N = 264) = 11.092, p = 0.004.

These findings indicate that third-party fact-checkers enhance participants’ judgment accuracy, and that authoritative fact-checkers do so most strongly. Accuracy under the authoritative fact-checker (People’s Daily) was significantly higher than under the non-authoritative fact-checker (Jiupai News), confirming the profound impact of fact-checker authority on participants’ judgments (Xu et al., 2023).

Accuracy in identifying false information

In the absence of third-party fact-checkers (F-NS), participants’ accuracy in judging false information was 36%. Introducing third-party fact-checkers improved judgment accuracy markedly. Specifically, labeling with a non-authoritative third-party fact-checker (F-JS, Jiupai News) raised accuracy in recognizing false information to 59%, while labeling with an authoritative third-party fact-checker (F-RS, People’s Daily) raised it to 55%, slightly below the non-authoritative counterpart.

The results of this study highlight a crucial finding: for false information, both authoritative and non-authoritative third-party fact-checkers significantly enhanced participants’ judgment accuracy, and to a similar degree. This contrasts with the pattern for true information, where only the authoritative fact-checker produced a clear accuracy gain while the non-authoritative label yielded only a marginal improvement. This contrast may be explained by cognitive dissonance theory (Metzger et al., 2020), which suggests that different levels of fact-checker authority elicit distinct psychological adjustments during the identification and processing of false information.

Hemodynamic responses

All statistical analyses of hemodynamic responses were restricted to three predefined regions of interest (ROIs), including the dorsolateral prefrontal cortex (DLPFC)-ROI (Channels Ch4/Ch6/Ch7/Ch10/Ch12/Ch18/Ch19/Ch20), the frontopolar (FP)-ROI (Channels Ch1/Ch8/Ch9/Ch16/Ch17), and the orbitofrontal cortex (OFC)-ROI (Channels Ch2/Ch3/Ch14/Ch22). The selection of these ROIs was based on the theory of prefrontal cortex functional partitioning and previous studies on information source credibility assessment. This ensured that the analysis focused on brain regions directly associated with “source authority judgment” and “cognitive conflict resolution,” while excluding irrelevant channels to avoid interference with the results. This study relies on the quantification of changes in HbO, which serves as a critical indicator for estimating dynamic alterations in cerebral blood flow (CBF) within activated brain regions (Meek, 2002). In the data acquisition process, an optimized Beer–Lambert law approach was employed to accurately extract the concentration changes of oxyhemoglobin from the raw signals (Chance et al., 1998). To enhance data reliability, the Homer2 software package was utilized (Huppert et al., 2009), converting light signal intensities into optical density (OD) values. Subsequently, detected motion artifacts were meticulously removed using Spline interpolation, ensuring that the data remained unaffected by non-physiological factors. To further refine signal quality, bandpass filtering was implemented, which effectively suppressed noise components while preserving signal frequencies pertinent to physiological variations. Upon completion of these preprocessing steps, the OD values were converted into concentration values, and block averaging was performed, aiming to mitigate random errors and enhance the statistical robustness of the data. 
Finally, comprehensive data analyses were conducted within the MATLAB software environment, comprehensively elucidating the relationship between HbO variations and brain functional activities. As fNIRS measures infrared light intensity, multiple signal-processing and conversion steps were required to transform the raw intensity data into final measurements, reported in micromolar (μM) concentrations.
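The preprocessing chain described above (intensity → optical density → filtering → concentration → block averaging) can be sketched in simplified form. The study used Homer2 in MATLAB with spline motion correction; the single-wavelength Python sketch below omits motion correction, and the sampling rate, filter band, and Beer–Lambert coefficients are placeholder assumptions rather than the study's actual parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0  # sampling rate (Hz), assumed for illustration

def intensity_to_od(intensity):
    """Convert raw light intensity to optical density relative to its mean."""
    return -np.log10(intensity / intensity.mean(axis=-1, keepdims=True))

def bandpass(od, low=0.01, high=0.2, fs=FS):
    """Suppress drift and physiological noise outside the task band."""
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, od, axis=-1)

def od_to_hbo(od, epsilon=1.4, pathlength=3.0, dpf=6.0):
    """Modified Beer-Lambert law: delta-C = delta-OD / (epsilon * d * DPF).
    Coefficients are placeholders; a real conversion solves a
    two-wavelength system for HbO and HbR jointly."""
    return od / (epsilon * pathlength * dpf)

def block_average(hbo, onsets, fs=FS, pre=5.0, post=10.0):
    """Average epochs from -pre to +post seconds around each stimulus onset,
    baseline-corrected to the pre-stimulus mean."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets:
        i = int(t * fs)
        seg = hbo[i - n_pre : i + n_post]
        epochs.append(seg - seg[:n_pre].mean())
    return np.mean(epochs, axis=0)

# Synthetic demo: 60 s of noisy intensity for one channel, stimuli at 20 s and 40 s.
rng = np.random.default_rng(0)
raw = 1.0 + 0.01 * rng.standard_normal(int(60 * FS))
hbo = od_to_hbo(bandpass(intensity_to_od(raw)))
avg = block_average(hbo, onsets=[20.0, 40.0])
print(avg.shape)  # one averaged epoch spanning -5 to +10 s
```

The −5 to +10 s epoch window matches the time range shown later for Figure 4.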

Effect of third-party fact-checkers on the evaluation of authentic information

This study employed the Friedman test to investigate whether statistically significant differences existed among paired measurements across the three conditions T-NS, T-JS, and T-RS. This non-parametric test was selected for two reasons. First, the data followed a repeated-measures design (each participant underwent all conditions) with a continuous dependent variable (the change in HbO concentration). Second, to determine whether parametric tests (repeated-measures ANOVA) were appropriate, the Shapiro–Wilk test was first applied to the difference scores to assess normality; the data from a considerable number of channels violated the assumption of normal distribution. Consequently, the Friedman test was adopted as a robust non-parametric alternative for examining differences among multiple related samples (i.e., the source conditions), as it does not rely on assumptions about the shape of the data distribution.

To control the family-wise error rate associated with multiple pairwise comparisons within each channel, post-hoc pairwise comparisons between the three conditions (T-NS, T-JS, T-RS) were conducted using the Nemenyi test when the Friedman test yielded a significant result for a given channel. The p-values from these Nemenyi comparisons were then adjusted using the Bonferroni correction method to account for the three comparisons performed per channel. This approach ensured that the overall Type I error rate for the set of post-hoc tests within each channel was maintained at α < 0.05.
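A minimal sketch of this testing sequence on synthetic data follows. Note that SciPy has no built-in Nemenyi test, so Bonferroni-corrected Wilcoxon signed-rank tests stand in for the Nemenyi step here; the data, sample size, and effect sizes are illustrative assumptions, not the study's values:

```python
import numpy as np
from itertools import combinations
from scipy.stats import shapiro, friedmanchisquare, wilcoxon

# Hypothetical per-participant mean HbO changes for one channel under the
# three source conditions (NS, JS, RS); n = 22 participants assumed.
rng = np.random.default_rng(1)
ns = rng.normal(0.05, 0.03, 22)
js = ns - rng.normal(0.03, 0.02, 22)   # JS assumed to reduce activation
rs = ns + rng.normal(0.00, 0.02, 22)

# 1. Normality check on difference scores (motivates the non-parametric test).
for a, b in combinations((ns, js, rs), 2):
    w, p_norm = shapiro(a - b)

# 2. Friedman omnibus test across the three related samples.
stat, p_omnibus = friedmanchisquare(ns, js, rs)
print(f"Friedman chi2 = {stat:.3f}, p = {p_omnibus:.4f}")

# 3. Post-hoc pairwise comparisons with Bonferroni correction. The paper
# used the Nemenyi test; Wilcoxon signed-rank tests are substituted here
# because SciPy does not ship a Nemenyi implementation.
if p_omnibus < 0.05:
    pairs = {"NS-JS": (ns, js), "NS-RS": (ns, rs), "JS-RS": (js, rs)}
    for name, (a, b) in pairs.items():
        _, p_raw = wilcoxon(a, b)
        p_adj = min(p_raw * len(pairs), 1.0)  # Bonferroni over 3 comparisons
        print(f"{name}: adjusted p = {p_adj:.4f}")
```

Running the post-hoc step only when the omnibus test is significant, and multiplying each raw p-value by the number of comparisons per channel, keeps the family-wise error rate at α < 0.05 as described above.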

The experimental data were meticulously analyzed, as presented in Table 2. Statistically significant differences were observed among T-NS, T-JS, and T-RS on specific channels, including Ch5 (p = 0.007 < 0.01), Ch6 (p = 0.049 < 0.05), Ch7 (p = 0.030 < 0.05), Ch18 (p = 0.027 < 0.05), Ch20 (p = 0.042 < 0.05), and Ch21 (p = 0.017 < 0.05). The specific magnitude of these differences was further analyzed by comparing the medians of the data across each channel.

Table 2

Name      Median    χ²       p         Comparison   Median difference   p
NS Ch5    0.058     9.905    0.007**   NS-JS        0.039               0.006**
JS Ch5    0.019                        NS-RS        0.002               0.644
RS Ch5    0.057                        JS-RS        −0.038              0.074
NS Ch6    0.071     6.048    0.049*    NS-JS        0.041               0.074
JS Ch6    0.029                        NS-RS        −0.004              0.900
RS Ch6    0.075                        JS-RS        −0.046              0.095
NS Ch7    0.039     7.000    0.030*    NS-JS        0.017               0.057
JS Ch7    0.022                        NS-RS        −0.015              0.900
RS Ch7    0.054                        JS-RS        −0.032              0.057
NS Ch18   0.032     7.190    0.027*    NS-JS        0.061               0.032*
JS Ch18   −0.029                       NS-RS        0.014               0.894
RS Ch18   0.017                        JS-RS        −0.047              0.095
NS Ch20   0.041     6.333    0.042*    NS-JS        −0.001              0.043*
JS Ch20   0.042                        NS-RS        −0.027              0.831
RS Ch20   0.067                        JS-RS        −0.025              0.152
NS Ch21   0.039     8.190    0.017*    NS-JS        0.027               0.013*
JS Ch21   0.013                        NS-RS        0.014               0.188
RS Ch21   0.025                        JS-RS        −0.012              0.519

Results of multi-sample Friedman analysis for true information.

Ch5 and Ch21 are non-ROI channels. Their results are for reference only and not included in the core cognitive interpretation. *p < 0.05, **p < 0.01.

On Channel Ch5 (no ROI assignment; BA8, involved in eye movement control), the difference between T-JS and T-RS was non-significant (p = 0.074 > 0.05). Because this channel is not tied to the core cognitive processes of source authority assessment, this result simply indicates no reliable difference in HbO concentration between the non-authoritative and authoritative fact-checking conditions for true information. On Channel Ch6 (DLPFC-ROI, BA9), the difference between T-NS and T-JS was non-significant (p = 0.074 > 0.05), suggesting that non-authoritative fact-checking does not meaningfully modulate neural activity related to cognitive control (the core function of the DLPFC-ROI) relative to the no-source condition. On Channel Ch7 (DLPFC-ROI, BA9), the differences between T-NS and T-JS (p = 0.057) and between T-JS and T-RS (p = 0.057) were both non-significant, as was the difference between T-NS and T-RS; although the uncorrected p-values approached the 0.05 threshold, the absence of statistical significance means these HbO variations provide no evidence of systematic differences in cognitive processing induced by fact-checker authority. On Channels Ch18 and Ch20, significant differences were found between T-NS and T-JS (p = 0.032 and p = 0.043, respectively), with no significant differences in the other comparisons. On Channel Ch21, a significant difference was likewise observed between T-NS and T-JS (p = 0.013), with no other pairwise differences reaching significance.

Specifically, within the context of true information, the introduction of non-authoritative third-party fact-checkers was associated with a distinctive pattern of neural modulation. In the absence of third-party fact-checkers (NS condition), a significant task-related HbO increase was observed, indicating a pronounced activation state. Labeling with non-authoritative third-party fact-checkers (JS condition), however, led to a marked HbO decrease across multiple critical channels within the right prefrontal cortex (RPFC). This statistically significant difference underscores the disruptive effect that non-authoritative third-party fact-checkers exert on the brain’s processing mechanisms as participants engage with authentic information, and it contributes to our understanding of the interplay between information verification and cognitive load.

Effects of third-party fact-checkers on the evaluation of false information

In this study, statistical analyses were conducted on three data sets, F-NS, F-JS, and F-RS, across channels Ch19, Ch13, and Ch17. Table 3 presents the analytical findings in detail.

Table 3

Name      Median    χ²       p        Comparison   Median difference   p
NS Ch19   0.039     8.333    0.016*   NS-JS        0.021               0.074
JS Ch19   0.018                       NS-RS        0.043               0.018*
RS Ch19   −0.004                      JS-RS        0.022               0.831
NS Ch13   0.043     5.190    0.075    NS-JS        −0.005              0.900
JS Ch13   0.049                       NS-RS        0.041               0.095
RS Ch13   0.003                       JS-RS        0.046               0.152
NS Ch17   0.027     5.190    0.075    NS-JS        0.025               0.152
JS Ch17   0.003                       NS-RS        0.027               0.095
RS Ch17   0.001                       JS-RS        0.002               0.900

Results of a multi-sample Friedman analysis for false information.

Ch13 is a non-ROI channel. Its result is for reference only and not included in the core cognitive interpretation. *p < 0.05.

Comparisons among F-NS, F-JS, and F-RS on Channel Ch19 revealed significant differences overall (Friedman omnibus p = 0.016 < 0.05), and the post-hoc comparison between F-NS and F-RS was significant (p = 0.018 < 0.05). On the same channel (DLPFC-ROI, BA46, a core region for cognitive conflict resolution), the comparison between F-NS and F-JS was non-significant (p = 0.074 > 0.05), indicating no reliable difference in HbO concentration between the no-source and non-authoritative fact-checking conditions for false information and suggesting that non-authoritative sources do not modulate cognitive conflict resolution processes in the DLPFC-ROI. The comparison between F-JS and F-RS on Ch19 was likewise non-significant. Similar non-significant results were observed on Channels Ch13 and Ch17. On Channel Ch13 (no ROI assignment; BA45, involved in language processing), comparisons among F-NS, F-JS, and F-RS were non-significant (p = 0.075 > 0.05); as this channel is unrelated to source authority assessment, no further interpretation was conducted. On Channel Ch17 (FP-ROI, BA10), comparisons among the conditions were also non-significant (p = 0.075 > 0.05); although this p-value falls below 0.1, it does not meet the conventional significance threshold for cognitive neuroscience research (α = 0.05), so this result does not support follow-up claims about false information processing.

When false information was presented without fact-checking (NS condition), a significant task-related HbO increase was seen, indicative of a pronounced activation state. However, when authoritative third-party fact-checkers accompanied the false information (RS condition), a marked decrease in HbO was observed within the RPFC on specific channels, such as Ch19. This finding further substantiates the influence of fact-checking authority on neural processing during the evaluation of false information.

Hemodynamic activation

To investigate the specific impacts of third-party fact-checking on individual cognitive processes, this study further compared cortical activation patterns before and after the incorporation of fact-checking across different types of information. The present study unveiled the pronounced influence of third-party fact-checkers on cerebral hemodynamics, particularly the concentration of HbO. Figure 3 illustrates the variations in HbO concentration following the addition of distinct third-party fact-checkers to different information types.

Figure 3

Specifically, within the context of true information, the introduction of non-authoritative third-party fact-checkers elicited a compelling neural activity pattern. In the absence of any additional fact-checking (NS condition), the RPFC, which is a crucial brain region responsible for cognitive control, decision-making, and information evaluation (Miller and Cohen, 2001), exhibited a significant activation state. This activation is inferred to reflect the brain’s active engagement in cognitive conflict monitoring and deliberative assessment of information veracity, such as detecting inconsistencies between the information and prior knowledge frameworks (McClellan et al., 2026). However, upon transitioning to the scenario with non-authoritative third-party fact-checkers (JS condition), a statistically significant decrease in RPFC activation intensity was observed across multiple key channels (including Ch5, Ch6, Ch7, Ch18, Ch20, and Ch21). This finding suggests that non-authoritative fact-checking information may interfere with the brain’s default evaluation of authentic information. Combined with the behavioral data showing only a marginal and non-significant increase in accuracy, this reduction in neural activity likely does not reflect an enhancement of a “deliberative assessment mechanism” (Moravec et al., 2019). Instead, the results more plausibly indicate that participants reduced their cognitive investment in the information or adopted a heuristic processing strategy based on the perceived low credibility of the source, engaging in rapid “cognitive offloading.” Consequently, this shift in processing strategy did not yield significant gains in behavioral accuracy.

Similarly, the impact of third-party fact-checkers on RPFC activation patterns was examined for false information. When false information was presented alone (NS condition), RPFC activation persisted, potentially reflecting the brain’s initial skepticism toward, or assessment of, information veracity relative to authentic information: the detection of discrepancies between the false information and existing knowledge structures triggers cognitive conflict signals that prompt further evaluation (McClellan et al., 2026). Notably, when authoritative third-party fact-checkers were integrated into false information (RS condition), a marked reduction in RPFC activation was observed on specific channels, such as Ch19, strongly suggesting that authoritative fact-checking may serve as a “fast track” in misleading contexts (Kahneman, 2012). This “fast track” effect manifests as participants relying on the source’s high credibility to resolve initial cognitive conflict (Moore et al., 2021), engaging in less intensive deliberation over the information itself; this accelerates decision-making but may reduce the depth of verification of the information’s actual veracity.

In-depth analysis of the experimental results revealed that for authentic information, the integration of non-authoritative fact-checking significantly influenced participants’ cognitive processing, primarily by weakening the engagement of RPFC-mediated deliberative evaluation and thereby reducing cognitive conflict monitoring. In contrast, for false information, the addition of authoritative fact-checking emerged as a pivotal factor modulating cognitive processing, suppressing RPFC activation related to cognitive conflict detection and thereby shortening the evaluation process. Notably, the behavioral data show that non-authoritative fact-checkers achieved higher accuracy (59%) than authoritative ones (55%) in judging false information. This contrasts with the neural data indicating an “authoritative source effect,” implying that authority does not always confer cognitive benefits. The discrepancy may arise because non-authoritative sources, owing to their lower perceived credibility, fail to provide a reliable “fast track” for resolving cognitive conflict (Feng et al., 2023); instead, they prompt participants to maintain higher vigilance or engage in more critical thinking, sustaining RPFC activation for longer periods to monitor and resolve potential discrepancies, thereby slightly enhancing behavioral discrimination of false information. This contrasting effect may be explained by cognitive dissonance theory (Festinger, 1962), wherein individuals confronted with incongruent or differentially credible external cues experience cognitive conflict and adjustment (McClellan et al., 2026), altering information processing strategies that are closely linked to cognitive control and metacognitive monitoring (Moravec et al., 2019). Our findings are consistent with a recent study investigating cognitive conflict in processing misinformation (McClellan et al., 2026).

Furthermore, this study reinforces the intricate association between the brain’s cognitive control regions and information processing. Specifically, the left PFC exhibited significantly enhanced activation upon the introduction of authoritative fact-checking sources such as People’s Daily. The left PFC is activated in conflict monitoring, while conflict resolution engages bilateral prefrontal regions (Sun et al., 2013). This could be interpreted as authoritative information providing a clear and credible external cue that facilitates the resolution of initial cognitive conflict triggered by information evaluation (Moore et al., 2021). This effectively reduces the uncertainty in judgment and promotes more efficient integration of the information into existing knowledge frameworks, thereby enhancing the acceptance of accurate information. Conversely, the right PFC, typically associated with conflict monitoring and uncertainty assessment (Moravec et al., 2019; Sun et al., 2013), demonstrated specific patterns reflecting the cognitive challenges and sustained evaluation elicited by non-authoritative information. Reduced RPFC activation under non-authoritative fact-checking for true information reflects diminished conflict monitoring, while its relatively sustained activation under non-authoritative fact-checking for false information supports prolonged evaluation. Notably, the functional roles of these prefrontal subregions are closely tied to cognitive control processes. The LPFC is critically involved in cognitive control and working memory; in contrast, the RPFC plays a key role in conflict monitoring and uncertainty assessment, supporting the detection of discrepancies between information and prior knowledge. The observed activation patterns thus reflect a complex integration of source credibility appraisal and higher-order cognitive processing during information evaluation.

Changes in HbO concentrations

Figure 4 compares the changes in HbO within the specific PFC region over time (−5 to 10 s), before and after the incorporation of third-party fact-checkers. The experimental data demonstrate clear disparities between the two conditions.

Figure 4

In the control condition, where false information was presented without third-party fact-checkers (NS condition, left panel), a rising trend in HbO concentration was observed, indicating increased activation in this brain region, likely reflecting engagement in conflict monitoring or elevated cognitive effort to resolve uncertainty about the information’s veracity (Kawiak et al., 2020; McClellan et al., 2026). Conversely, when authoritative third-party fact-checkers were introduced (RS condition), a notable decline in HbO concentration was observed. This pronounced alteration suggests that authoritative fact-checking significantly affects participants’ cognitive processes: the high credibility of the source provides a strong external signal that resolves the initial cognitive conflict triggered by the false information (Feng et al., 2023), reducing the internal cognitive load for conflict resolution and uncertainty assessment. This reduction in cognitive demand manifests as attenuated PFC activation, supporting the hypothesis that authoritative sources can serve as a “fast track” in misleading contexts (Kahneman, 2012), shortening the time required to resolve cognitive conflict but potentially limiting in-depth evaluation of the information itself.

Similar results were observed for true information: the HbO in the specific RPFC region exhibited an upward trend, potentially reflecting neural activity related to baseline assessment of information authenticity or cognitive preparation, including initial detection of consistency (or inconsistency) between the true information and prior knowledge, which may trigger low-level cognitive conflict that prompts further verification (McClellan et al., 2026). However, labeling authentic information with non-authoritative third-party fact-checkers altered the RPFC activation pattern: HbO concentration trended downward, indicating reduced activation. This shift suggests that the introduction of a non-authoritative source may trigger rapid trust calibration or source reliability evaluation. Participants may judge the non-authoritative source as less reliable, reducing their reliance on its cues and, consequently, the engagement of RPFC-mediated conflict monitoring and deliberative evaluation. This neural shift, together with the lack of significant improvement in behavioral accuracy, points to the possibility that non-authoritative fact-checking prompts an efficiency-oriented, source-feature-based shallow processing strategy: participants reduce cognitive conflict-related activation without improving the accuracy of their information judgments.

Discussion

This study delves into the neural mechanisms underlying the impact of the credibility of third-party fact-checkers on individuals’ cognitive processing, utilizing fNIRS. These findings offer a novel perspective and empirical evidence for understanding information recipients’ decision-making behaviors and enhancing their information discrimination abilities. The experimental results not only confirm the pivotal role of authoritative third-party fact-checkers in true information processing but also reveal a unique phenomenon where both authoritative and non-authoritative fact-checking significantly enhance judgment accuracy in the context of false information.

Firstly, in the processing of true information, the introduction of authoritative third-party fact-checkers (e.g., People’s Daily) markedly elevated participants’ judgment accuracy, from a baseline of 37% to 62%. This outcome echoes previous research and underscores the central role of authoritative fact-checking in shaping users’ cognitive frameworks and information processing strategies (Xu et al., 2023). The fNIRS data provide neural-level evidence that authoritative fact-checking activates the left prefrontal cortex (LPFC), a neural substrate for cognitive conflict resolution and working memory integration (McClellan et al., 2026; Sun et al., 2013). This enhancement suggests that authoritative information provides a reliable external cue that effectively resolves the initial cognitive conflict triggered by information evaluation, facilitating efficient integration of the information into existing knowledge structures and promoting accurate judgment. The process encompasses not only psychological mechanisms of trust transfer but also the influence of cognitive biases, reducing the cognitive load required for verification and enabling users to readily accept and rely on accurate information (Feng et al., 2023; Moore et al., 2021). Additionally, a notable change in the activation pattern of the right PFC (Miller and Cohen, 2001) was observed when participants were presented with true information accompanied by non-authoritative fact-checking. Exposure to true information alone elicited RPFC activation, possibly reflecting the brain’s initial skepticism or evaluation of information authenticity; when true information was presented with non-authoritative fact-checking, however, RPFC activation intensity decreased significantly.
Combined with the behavioral data showing only a marginal and non-significant increase in accuracy, this alteration in neural activity likely does not reflect an enhancement of a “deliberative assessment mechanism.” Instead, it more plausibly indicates that participants reduced their cognitive investment in the information or adopted a heuristic processing strategy based on the perceived low credibility of the source (Moravec et al., 2019), engaging in rapid “cognitive offloading” that diminishes the engagement of RPFC-mediated cognitive conflict monitoring and detailed evaluation.

Contrasting with true information processing, this study uncovered an intriguing phenomenon in the handling of false information. Both authoritative and non-authoritative third-party fact-checkers significantly enhanced participants’ accuracy in judging false information, from 36% to 55% and 59%, respectively. Notably, non-authoritative fact-checkers achieved higher accuracy (59%) than authoritative ones (55%). This finding contrasts with the neural data indicating an “authoritative source effect,” implying that authority does not always confer cognitive benefits in resolving cognitive conflict (Metzger et al., 2020). A plausible explanation is that authoritative sources, by virtue of their high perceived credibility, trigger a “fast track” for resolving cognitive conflict (Moore et al., 2021), whereas non-authoritative sources prompt participants to maintain higher vigilance and engage in more critical thinking, sustaining RPFC activation to monitor discrepancies between the false information and prior knowledge (McClellan et al., 2026) and thereby enhancing the detection of inaccuracies. These results suggest that authoritative fact-checking accelerates cognitive conflict resolution during the dissemination of false information, while non-authoritative fact-checking prolongs conflict monitoring and supports more thorough verification. Analysis of the fNIRS data revealed that exposure to false information elicited significantly different right PFC activation patterns depending on the fact-checker label. Exposure to false information alone consistently activated the RPFC, possibly reflecting the brain’s initial skepticism or evaluation of information authenticity, which triggers cognitive conflict signals that prompt further assessment (Festinger, 1962).
However, integration of false information with authoritative fact-checking resulted in a significant decrease in the activation state of the RPFC in specific channels, strongly suggesting that authoritative fact-checking may serve as a “fast track” in misleading contexts (Kahneman, 2012). This “fast track” effect manifests as participants using the source’s high credibility to quickly resolve initial cognitive conflict (Feng et al., 2023), thereby engaging in less intensive cognitive deliberation and assessment of the information itself, which accelerates decision-making but may reduce the depth of conflict monitoring (Moravec et al., 2019; Kahneman, 2012).

This contrasting pattern can be explained by cognitive dissonance theory (Metzger et al., 2020), whereby individuals confronted with incongruent or differentially credible external cues experience cognitive conflict and adjustment, dynamically altering their information processing strategies (McClellan et al., 2026). Specifically, authoritative sources reduce cognitive dissonance by providing reliable cues (Moore et al., 2021), while non-authoritative sources either fail to resolve dissonance (for false information) or diminish the need to address it (for true information). Furthermore, in the absence of third-party fact-checkers, accuracy rates for true and false information were only 37% and 36%, respectively. These results are consistent with Moravec et al., who report that the dominance of confirmation bias substantially impairs users’ ability to discriminate true from false information (Moravec et al., 2019). This impairment likely stems from weakened cognitive conflict detection: individuals prioritize information that aligns with prior beliefs, reducing the activation of PFC regions responsible for monitoring discrepancies (McClellan et al., 2026), which leads to unreliable assessments of information veracity.

Moreover, this study further validates the intricate connection between the brain’s cognitive control regions and information processing. The left PFC showed significantly enhanced activation under authoritative fact-checking conditions, reflecting its facilitative role in cognitive conflict resolution and working memory integration (Miller and Cohen, 2001). Conversely, the decreased activation of the right PFC when true information was paired with non-authoritative fact-checking may stem from its involvement in conflict monitoring and uncertainty assessment (Moravec et al., 2019): sources perceived as unreliable invite reduced investment in conflict detection, leading to diminished activation. This finding highlights the complex interplay between cognitive conflict modulation and deliberative evaluation during information processing.

Consequently, the following recommendations are proposed. In information dissemination, the authority and transparency of third-party fact-checkers should be prioritized to provide reliable cues for cognitive conflict resolution (Feng et al., 2023), ensuring information reliability while avoiding blind reliance on authoritative sources. Simultaneously, users should actively screen and verify information, fostering critical thinking skills that enhance cognitive conflict detection. This implies sustained engagement of PFC regions to monitor discrepancies between information and prior knowledge (McClellan et al., 2026). Furthermore, cognitive conflict mechanisms should be leveraged to optimize information dissemination strategies, such as designing interventions that prompt appropriate levels of conflict monitoring. This approach represents a crucial direction for future information communication research.

In conclusion, the present study utilized fNIRS to reveal the profound influence of the credibility of third-party fact-checkers on the cognitive processing of online rumors. These findings provide a pivotal theoretical and practical foundation for optimizing information dissemination strategies and enhancing individuals' information discrimination capabilities. Future research should examine more deeply how diverse information types, individual characteristics, and contextual factors jointly shape cognitive processing, thereby developing a more comprehensive and nuanced model of information processing. Additionally, an in-depth exploration of the role of cognitive conflict in information dissemination would facilitate more effective harnessing of this mechanism: by promoting appropriate conflict monitoring and resolution, it could increase public trust in information and enhance discrimination abilities, ultimately contributing to a healthy and orderly information ecosystem.

Conclusion

This study operationalized the credibility of third-party fact-checkers based on institutional authority, derived from an organization's official status and public credibility. This study selected People's Daily (a high-authority, central-level official outlet) and Jiupai News (a lower-authority, local-market platform) as contrasting sources, a choice anchored in established credibility research documenting a "stronger central, weaker local" media landscape. The validity of this operational definition is further substantiated by the behavioral results. A chi-square test of judgment accuracy for true information revealed a significant difference in the distribution of accuracy across source conditions (no source, Jiupai News, People's Daily; χ²(2, N = 264) = 11.092, p = 0.004). Specifically, accuracy was highest when information was tagged with People's Daily (62%), intermediate when tagged with Jiupai News (45%), and lowest in the absence of any source labeling (37%). This ordering supports the validity of the "institutional authority" manipulation and confirms differences in cognitive processing across conditions.
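The chi-square test of independence reported above can be sketched with the standard library alone. The cell counts below are not taken from the article; they are illustrative reconstructions from the reported accuracy percentages under the assumption of 88 true-information trials per condition (N = 264), so the resulting statistic only approximates the reported χ²(2) = 11.092.

```python
# Chi-square test of independence on a 2x3 contingency table
# (judgment outcome: correct/incorrect x source condition).
# Counts are illustrative, reconstructed from the reported accuracy
# rates (62%, 45%, 37%) assuming 88 trials per condition (N = 264).

def chi_square(table):
    """Return (chi-square statistic, degrees of freedom) for a contingency table."""
    rows, cols = len(table), len(table[0])
    row_totals = [sum(r) for r in table]
    col_totals = [sum(table[i][j] for i in range(rows)) for j in range(cols)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(rows):
        for j in range(cols):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat, (rows - 1) * (cols - 1)

# Rows: correct / incorrect; columns: People's Daily, Jiupai News, no source.
observed = [
    [55, 40, 33],   # ~62%, ~45%, ~37% of 88 trials judged correctly
    [33, 48, 55],
]
stat, df = chi_square(observed)
print(f"chi2({df}) = {stat:.3f}")  # near the reported chi2(2) = 11.092
```

In practice, `scipy.stats.chi2_contingency` would also return the p-value; the manual version above is shown only to make the expected-frequency computation explicit.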

This study focuses on the profound influence of the credibility of third-party fact-checkers on cognitive processing. Functional near-infrared spectroscopy (fNIRS) was used to investigate differences in neural activity when processing information verified by authoritative versus non-authoritative third-party fact-checkers. To examine whether experimental repetitions induced practice or fatigue effects that could confound the core findings, a judgment stability analysis was conducted for all 16 information stimuli. For each piece of information, a Cochran's Q test assessed whether the distribution of participants' judgments (true/false) remained consistent across the five repeated presentations. None of the 16 tests reached statistical significance (all p > 0.05), demonstrating that participants' judgments for each specific piece of information remained stable across the five repetitions, with no systematic change. Consequently, the observed between-condition differences in behavioral and neural patterns can be robustly attributed to the experimental manipulation (i.e., source authority) rather than to temporal effects arising from repeated presentation.
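The stability check described above can be illustrated with a minimal Cochran's Q computation. The judgment matrix below is synthetic, not the study's data: each row stands for one hypothetical participant's five binary judgments (1 = "true", 0 = "false") of the same stimulus.

```python
# Cochran's Q test for consistency of binary judgments across
# repeated presentations, using only the standard library.

def cochrans_q(data):
    """Return (Q statistic, degrees of freedom) for a binary
    participants-by-repetitions matrix."""
    k = len(data[0])                                   # number of repetitions
    col = [sum(row[j] for row in data) for j in range(k)]
    row = [sum(r) for r in data]
    total = sum(row)
    numerator = (k - 1) * (k * sum(c * c for c in col) - total * total)
    denominator = k * total - sum(r * r for r in row)
    return numerator / denominator, k - 1

# Synthetic data: four participants, five presentations of one stimulus.
judgments = [
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
]
q, df = cochrans_q(judgments)
print(f"Q({df}) = {q:.3f}")  # Q near 0 indicates stable judgments across repetitions
```

A p-value would follow from referring Q to a χ² distribution with k − 1 degrees of freedom (e.g., via `scipy.stats.chi2.sf`); a non-significant result, as reported for all 16 stimuli, indicates no systematic drift across repetitions.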

The findings indicate that authoritative third-party fact-checkers offer significant advantages in enhancing the evaluation accuracy of true information, evidenced by increased activation in the left prefrontal cortex and reflecting the pivotal roles of trust transfer and cognitive conflict resolution in information processing. For true information, the introduction of non-authoritative fact-checking was associated with decreased RPFC activation; this reduction, coupled with only a marginal improvement in behavioral accuracy, suggests a shift toward heuristic processing or cognitive offloading rather than enhanced deliberative assessment. When confronted with false information, authoritative third-party fact-checkers serve as a "fast track," prompting participants to curtail in-depth contemplation and evaluation of the information; this improves judgment accuracy to a certain extent but may bypass thorough verification of the information's authenticity. Notably, behavioral data revealed that judgment accuracy for false information was slightly higher with non-authoritative fact-checkers (59%) than with authoritative ones (55%). This runs counter to the neural "fast track," suggesting that non-authoritative sources may, in some contexts, inadvertently promote more vigilant assessment: RPFC activation is sustained, indicating active cognitive conflict monitoring, which enhances the detection of false information. Non-authoritative third-party fact-checking thus also demonstrates the potential to promote cautious assessment, challenging conventional assumptions about source authority and underscoring the complexity and multifaceted nature of information processing.
This pattern of results, where source authority differentially modulates processing based on information veracity, can be interpreted through the lens of cognitive dissonance theory (Chandra and Choudhury, 2023), as individuals adjust their processing strategies in response to incongruent cues.

This research not only deepens our understanding of the relationship between information credibility and cognitive processing but also provides crucial empirical evidence for optimizing information dissemination strategies. In the information era, the authority of third-party fact-checkers can be leveraged to enhance the public's ability to discriminate information, while encouraging careful consideration of content authenticity, individual cognitive characteristics, and contextual factors. Although authority can enhance efficiency and trust in some scenarios, it may also curtail active evaluation of content, whereas non-authoritative sources might, under specific conditions, elicit additional scrutiny. Furthermore, the application of fNIRS technology opens new avenues for cognitive science research, demonstrating its unique value in uncovering the intricacies of the brain's cognitive processes.

In summary, this study adopts an interdisciplinary methodology integrating perspectives from psychology, cognitive science, and neuroscience, and contributes novel insights into understanding information receivers’ psychological mechanisms. The findings facilitate the optimization of the information dissemination environment and promote the construction of a healthy information ecosystem. Future research should focus on exploring the impacts of different information types, individual differences, and socio-cultural environments on cognitive processing.

Statements

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.

Ethics statement

The studies involving humans were approved by the Ethics Committee of Institute of Neuroscience and Cognitive Psychology of Anhui Polytechnic University. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

SC: Conceptualization, Visualization, Writing – original draft, Software. XY: Conceptualization, Data curation, Investigation, Methodology, Software, Visualization, Writing – original draft. YD: Funding acquisition, Investigation, Resources, Supervision, Validation, Writing – review & editing.

Funding

The author(s) declared that financial support was received for this work and/or its publication. This work is supported by the National Social Science Fund of China (24BGL148), the Anhui Provincial Natural Science Foundation (No. 2308085MG228), the Humanities and Social Science Fund of the Ministry of Education of China (No. 23YJC630032), and the Scientific Research Project of Anhui Universities (No. 2022AH010060, 2023AH030023).

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that Generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Bhattacherjee, A. (2023). Can fact-checking influence user beliefs about misinformation claims: an examination of contingent effects. MIS Q. 47, 1679–1692. doi: 10.25300/misq/2023/17688

2. Bolsen, T., and Druckman, J. N. (2015). Counteracting the politicization of science. J. Commun. 65, 745–769. doi: 10.1111/jcom.12171

3. Bondielli, A., and Marcelloni, F. (2019). A survey on fake news and rumour detection techniques. Inf. Sci. 497, 38–55. doi: 10.1016/j.ins.2019.05.035

4. Bordia, P., DiFonzo, N., Haines, R., and Chaseling, E. (2005). Rumor denials as persuasive messages: effects of personal relevance, source, and message characteristics. J. Appl. Soc. Psychol. 35, 1301–1331. doi: 10.1111/j.1559-1816.2005.tb02172.x

5. Chance, B., Anday, E., Nioka, S., Zhou, S., Hong, L., Worden, K., et al. (1998). A novel method for fast imaging of brain function, non-invasively, with light. Opt. Express 2, 411–423. doi: 10.1364/oe.2.000411

6. Chandra, S., and Choudhury, A. (2023). "Advancements in measuring cognition using EEG and fNIRS: a survey" in Handbook of Metrology and Applications (Singapore: Springer Nature Singapore), 1879–1917.

7. Ding, Y., Yang, X., Zhang, W., Lyu, W., and Wang, M. Y. (2024). Unveiling the authenticity evaluation and neural response to online health rumors: an ERPs study. Sci. Rep. 14:31274. doi: 10.1038/s41598-024-82696-x

8. Düzel, E., Yonelinas, A. P., Mangun, G. R., Heinze, H. J., and Tulving, E. (1997). Event-related brain potential correlates of two states of conscious awareness in memory. Proc. Natl. Acad. Sci. USA 94, 5973–5978. doi: 10.1073/pnas.94.11.5973

9. Esposito, R., Bortoletto, M., and Miniussi, C. (2020). Integrating TMS, EEG, and MRI as an approach for studying brain connectivity. Neuroscientist 26, 471–486. doi: 10.1177/1073858420916452

10. Faul, F., Erdfelder, E., Lang, A. G., and Buchner, A. (2007). G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 39, 175–191. doi: 10.3758/bf03193146

11. Feng, K. J. K., Ritchie, N., Blumenthal, P., Parsons, A., and Zhang, A. X. (2023). Examining the impact of provenance-enabled media on trust and accuracy perceptions. Proc. ACM Hum. Comput. Interact. 7, 1–42. doi: 10.48550/arXiv.2303.12118

12. Festinger, L. (1962). A Theory of Cognitive Dissonance. Stanford: Stanford University Press.

13. Franke, T. M., Ho, T., and Christie, C. A. (2012). The chi-square test: often used and more often misinterpreted. Am. J. Eval. 33, 448–458. doi: 10.1177/1098214011426594

14. Gordon, A., Quadflieg, S., Brooks, J. C. W., Ecker, U. K. H., and Lewandowsky, S. (2019). Keeping track of 'alternative facts': the neural correlates of processing misinformation corrections. NeuroImage 193, 46–56. doi: 10.1016/j.neuroimage.2019.03.014

15. Herold, F., Wiegel, P., Scholkmann, F., and Müller, N. G. (2018). Applications of functional near-infrared spectroscopy (fNIRS) neuroimaging in exercise–cognition science: a systematic, methodology-focused review. J. Clin. Med. 7:466. doi: 10.3390/jcm7120466

16. Huppert, T. J., Diamond, S. G., Franceschini, M. A., and Boas, D. A. (2009). HomER: a review of time-series analysis methods for near-infrared spectroscopy of the brain. Appl. Opt. 48, 280–298. doi: 10.1364/ao.48.00d280

17. Kahneman, D. (2012). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

18. Kaiser, V., Bauernfeind, G., Kreilinger, A., Kaufmann, T., Kübler, A., Neuper, C., et al. (2014). Cortical effects of user training in a motor imagery based brain–computer interface measured by fNIRS and EEG. NeuroImage 85, 432–444. doi: 10.1016/j.neuroimage.2013.04.097

19. Kaňková, J., and Matthes, J. (2026). Think twice, scroll once: encouraging critical reflection as a shield against health misinformation and overgeneralized messaging by social media influencers. Comput. Hum. Behav. 177:108896. doi: 10.1016/j.chb.2025.108896

20. Kawiak, A., Wojcik, G. M., Schneider, P., Kwasniewicz, L., and Wierzbicki, A. (2020). Whom to believe? Understanding and modeling brain activity in source credibility evaluation. Front. Neuroinform. 14:607853. doi: 10.3389/fninf.2020.607853

21. Leff, D., Koh, P. H., Aggarwal, R., Leong, J., Deligianni, F., Elwell, C., et al. (2006). "Optical mapping of the frontal cortex during a surgical knot-tying task, a feasibility study," in Medical Imaging and Augmented Reality: Third International Workshop, Shanghai, China, August 17–18, 2006, Proceedings 3 (Berlin; Heidelberg: Springer), 140–147.

22. Li, H., and Sakamoto, Y. (2014). Social impacts in social media: an examination of perceived truthfulness and sharing of information. Comput. Hum. Behav. 41, 278–287. doi: 10.1016/j.chb.2014.08.009

23. Li, R., Yang, D., Fang, F., Hong, K.-S., Reiss, A. L., and Zhang, Y. (2022). Concurrent fNIRS and EEG for brain function investigation: a systematic, methodology-focused review. Sensors 22:5865. doi: 10.3390/s22155865

24. Malonek, D., and Grinvald, A. (1996). Interactions between electrical activity and cortical microcirculation revealed by imaging spectroscopy: implications for functional brain mapping. Science 272, 551–554. doi: 10.1126/science.272.5261.551

25. Marques, J. F., Canessa, N., and Cappa, S. (2009). Neural differences in the processing of true and false sentences: insights into the nature of 'truth' in language comprehension. Cortex 45, 759–768. doi: 10.1016/j.cortex.2008.07.004

26. McClellan, M. C., Kirwan, B., and Ashby, S. (2026). Neural mechanisms of cognitive conflict: processing COVID-19 vaccine misinformation. Front. Neurosci. 19:1661523. doi: 10.3389/fnins.2025.1661523

27. Meek, J. (2002). Basic principles of optical imaging and application to the study of infant development. Dev. Sci. 5, 371–380. doi: 10.1111/1467-7687.00376

28. Metzger, M. J., Hartsell, E. H., and Flanagin, A. J. (2020). Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun. Res. 47, 3–28. doi: 10.1177/0093650215613136

29. Miller, E. K., and Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annu. Rev. Neurosci. 24, 167–202. doi: 10.1146/annurev.neuro.24.1.167

30. Moore, A., Hong, S., and Cram, L. (2021). Trust in information, political identity and the brain: an interdisciplinary fMRI study. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 376:20200140. doi: 10.1098/rstb.2020.0140

31. Moran, R. E., and Nechushtai, E. (2022). Before reception: trust in the news as infrastructure. Journalism 24, 456–474. doi: 10.1177/14648849211048961

32. Moravec, P. L., Minas, R. K., and Dennis, A. R. (2019). Fake news on social media: people believe what they want to believe when it makes no sense at all. MIS Q. 43, 1343–1360. doi: 10.25300/MISQ/2019/15505

33. Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Rev. Gen. Psychol. 2, 175–220. doi: 10.1037/1089-2680.2.2.175

34. Pal, A., Chua, A., and Goh, D. (2019). Debunking rumors on social media: the use of denials. Comput. Hum. Behav. 96, 110–122. doi: 10.1016/j.chb.2019.02.022

35. Panagiotaropoulos, T. I. (2024). An integrative view of the role of prefrontal cortex in consciousness. Neuron 112, 1626–1641. doi: 10.1016/j.neuron.2024.04.028

36. Pennycook, G., Bear, A., Collins, E. T., and Rand, D. G. (2020). The implied truth effect: attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manag. Sci. 66, 4944–4957. doi: 10.1287/mnsc.2019.3478

37. Prike, T., Butler, L. H., and Ecker, U. K. H. (2024). Source-credibility information and social norms improve truth discernment and reduce engagement with misinformation online. Sci. Rep. 14:6900. doi: 10.1038/s41598-024-57560-7

38. Qiang, Y., and Hu, Y. (2025). An empirical assessment of the credibility of China's new mainstream media. J. Commun. Rev. 78, 61–76. doi: 10.14086/j.cnki.xwycbpl.2025.03.006

39. Quaresima, V., and Ferrari, M. (2019). Functional near-infrared spectroscopy (fNIRS) for assessing cerebral cortex function during human behavior in natural/social situations: a concise review. Organ. Res. Methods 22, 46–68. doi: 10.1177/1094428116658959

40. Sun, J., Sun, B., Zhang, L., Luo, Q., and Gong, H. (2013). Correlation between hemodynamic and electrophysiological signals dissociates neural correlates of conflict detection and resolution in a Stroop task: a simultaneous near-infrared spectroscopy and event-related potential study. J. Biomed. Opt. 18:096014. doi: 10.1117/1.jbo.18.9.096014

41. Tandoc, E. C., Ling, R., Westlund, O., Duffy, A., Goh, D., and Wei, L. Z. (2018). Audiences' acts of authentication in the age of fake news: a conceptual framework. New Media Soc. 20, 2745–2763. doi: 10.1177/1461444817731756

42. Waight, J. L., Arias, N., Jiménez-García, A. M., and Martini, M. (2024). From functional neuroimaging to neurostimulation: fNIRS devices as cognitive enhancers. Behav. Res. Methods 56, 2227–2242. doi: 10.3758/s13428-023-02144-y

43. Xu, Y. X., Zhou, D., and Wang, W. (2023). Being my own gatekeeper, how I tell the fake and the real - fake news perception between typologies and sources. Inf. Process. Manag. 60:103228. doi: 10.1016/j.ipm.2022.103228

44. Yang, J., Su, S., and Kang, C. (2026). The paradox of authority and trust in AI-generated content: cognitive decision mechanisms based on the drift diffusion model. Chin. J. Appl. Psychol., 1–14. doi: 10.3785/CJAP.025111

45. Zhu, L., and Lv, J. (2023). Review of studies on user research based on EEG and eye tracking. Appl. Sci. 13:6502. doi: 10.3390/app13116502

46. Zinos, A., Wagner, J. C., Beardsley, S. A., Chen, W. L., Conant, L., Malloy, M., et al. (2024). Spatial correspondence of cortical activity measured with whole head fNIRS and fMRI: toward clinical use within subject. NeuroImage 290:120569. doi: 10.1016/j.neuroimage.2024.120569

Summary

Keywords

fNIRS, online rumors, source credibility, tagging warnings, third-party fact-checkers

Citation

Cheng S, Yang X and Ding Y (2026) Authority reliance vs. deliberative assessment in processing online rumors: evidence from fNIRS. Front. Hum. Neurosci. 20:1750499. doi: 10.3389/fnhum.2026.1750499

Received

20 November 2025

Revised

31 January 2026

Accepted

06 February 2026

Published

24 February 2026

Volume

20 - 2026

Edited by

Xiaosu Hu, University of Michigan, United States

Reviewed by

Francesco Di Nocera, Sapienza University of Rome, Italy

Qiwei Li, Chinese Academy of Sciences (CAS), China

Copyright

*Correspondence: Xinyue Yang, ; Yi Ding,

†These authors have contributed equally to this work
