Abstract
The rapid development of AI technology has triggered intense discussions on social media. As key users, online opinion leaders (OILs) wield "emotional power" that exhibits an "emotion setting" effect, influencing users' perceptions of AI, with their "expert" identity playing a crucial role in emotional communication. To examine the impact of OILs' expert credibility and emotional arousal level on users' AI perception, an experimental study (N = 102) was conducted. Results show that under a negative tone, higher-expert-credibility OILs led participants to perceive AI as more useful (PU) and easier to use (PEOU). Similarly, higher emotional arousal strengthened these perceptions. Notably, for high-credibility OILs, arousal significantly affected both PEOU and PU, whereas for low-credibility OILs, it impacted PU but not PEOU. Furthermore, AI anxiety mediates the arousal-perception relationship, moderated by expert credibility. Critically, emotional arousal significantly influenced AI anxiety regardless of credibility level. This study elucidates how OILs shape sociotechnical imaginaries amid rapid AI advancement.
Introduction
Artificial Intelligence (AI) technology has now infiltrated all aspects of people's lives, and its potential has not only captured the interest of industry but also sparked widespread public imagination regarding its applications. Scholars, however, soon recognized that the initial definition of sociotechnical imaginaries focused primarily on the national policy dimension, overlooking the various other ways that technologies can shape social life. Jasanoff (2015) accordingly defines sociotechnical imaginaries as "collectively held, institutionally stabilized, and publicly performed visions of desirable futures", visions animated by shared understandings of socially attainable life forms reinforced through technological advancement. Precisely within this conceptual framework, experts and opinion leaders in the social media era co-construct such shared understandings with users through the dual pathways of knowledge negotiation and emotional resonance.
One of the most essential contributions scientists can make is helping the public understand when the truth (or, at least, experts' best understanding of it) is counterintuitive (Houck et al., 2025). From the perspective of media content, Lupton notes that news reports can shape these imaginaries through positive framing (Lupton, 2017). Other research has focused on the communication process, highlighting that media articles and social media campaigns are equally critical contributors (Quinlan, 2021).
Experts often act as online opinion leaders, disseminating information about emerging technologies (including AI) to users via social media platforms. Opinion leaders can guide these imaginaries on social media platforms from a professional standpoint (Zhuang and Zhang, 2022). Within this context, emotions play a dual role in science communication: as a catalyst for engagement, they enhance the accessibility of complex concepts through resonance effects; yet as a potential liability, excessive emotional framing may compromise factual accuracy. This dichotomy is exemplified by AI fear-mongering narratives that obscure genuine technological progress while amplifying public anxiety.
To elicit more valid responses from participants, we designed the experiment to simulate real-life social media browsing scenarios, aiming to examine how online opinion leaders' expert credibility and messages conveying varying emotional arousal levels shape users' perceptions of AI technology in social media contexts. Specifically, adopting the lens of science communication on social media, we integrate the framework of online opinion leadership to investigate the multifaceted identities of these digital influencers and their associated emotional engagement mechanisms. Additionally, recognizing that users also possess agency, the study includes an analysis of the role of AI anxiety.
This study makes significant contributions to science communication and the study of public perception of emerging technologies. Theoretically, it provides the first empirical evidence in social media contexts that opinion leaders' influence operates through a dual-path mechanism, namely the interaction between expert credibility and emotional arousal in shaping AI perceptions, thereby addressing a critical gap in social media-based science communication research. Furthermore, by incorporating "AI anxiety" as a key variable, our work moves beyond the communicator-centered paradigm, empirically validating the mediating role of audiences in technology perception formation. The study also offers practical guidance for science communication: it gives communicators a clear method for balancing emotional content and trustworthiness, informs systems for demonstrating expertise credentials, and supports educational programs that address both facts and feelings about science. Together, these measures let experts and the public jointly shape how people view AI, moving beyond merely correcting misunderstandings toward building a shared understanding of technology.
The impact of opinion leaders' expert credibility in shaping users' perception
When scientists communicate with the public, they take on one of several roles, which can complicate their task (Fähnrich and Lüthje, 2017). An "issue advocate" or "public intellectual" classifies events or focuses on research implications for a particular political agenda, while other scientists take the position of "science arbiter" or "honest broker" (Pielke, 2007). In communication research, scientists acting as issue advocates or public intellectuals clearly fit the definition of "opinion leader."
The identity and characteristics of opinion leaders have long attracted scholarly attention, and various research methods have been used to identify them (Rogers and Cartano, 1962). In Lazarsfeld's original research, opinion leaders were closely tied to political information and highly familiar with that field. Some scholars define opinion leaders as "actually experts and/or social connectors who are active participants in online and offline communities" (Goldenberg et al., 2006). Regarding expert identity, credibility is an important basis for judgment. Credibility refers to the degree to which an individual is perceived to possess relevant expertise in a particular subject and can be relied upon to provide an objective assessment of it; accordingly, a credible source is a communication medium known for accurate information (Goldsmith et al., 2000; Visentin et al., 2019). In a case study of the COVID-19 vaccine, Chinese scholars likewise point out that professional online opinion leaders can amplify the public's social and technological imagination (Zhuang and Zhang, 2022). Based on existing research, it is generally believed that the credibility of opinion leaders enhances their persuasive effect.
Extant research has predominantly focused on how opinion leaders positively promote audience adoption, while largely neglecting whether negative content from opinion leaders may trigger audience rejection of related technologies or products. However, it is noteworthy that negative emotions typically exert stronger influence than positive ones (Baumeister et al., 2001), and a widely recognized negativity bias exists (Rozin and Royzman, 2001). To address this gap, the present study explores negative expression effects and proposes the following research hypothesis:
H1: Negative expressions about AI technology from high-expert-credibility online opinion leaders have a stronger effect on users' perceptions of the technology than those from low-expert-credibility online opinion leaders.
The role of emotional arousal
Emotion refers to people's attitude toward objective things and their corresponding behavioral reactions (Plutchik, 1984). According to Emotion As Social Information (EASI) theory, emotional expressions provide information to observers, which may influence their cognition, attitudes, and behavior (Van Kleef et al., 2011).
If science communication relies solely on factual information (the "deficit model"), it often fails to effectively reach the public (Taddicken and Reif, 2020). Emerging "public engagement" models demonstrate that emotional narratives (e.g., science slams, edutainment videos) can lower participation barriers, stimulate positive emotions, and thereby enhance the appeal of scientific topics (Niemann et al., 2020). Emotions serve not merely as communication tools but also as bridges connecting science with the public. For instance, "hope narratives" in environmental issues are more effective at driving behavioral change than mere risk warnings (Lidskog et al., 2020). Negative emotions (e.g., fear, alienation) may widen the gap between the public and science, particularly as marginalized groups actively disengage due to "emotional distance" (Humm et al., 2020).
The public seeks out and/or encounters information about science and technology from various sources, ranging from television, newspapers, and social media to interpersonal relationships (Besley and Hill, 2020; Pew Research Center, 2017). The role that emotion plays in social media communication has been recognized by scholars, and extensive research across multiple platforms has been conducted. Social media, as "emotion media," can amplify science communication through resonance, yet they may also fuel the spread of misinformation via emotional polarization (Taddicken and Wolff, 2020). A study on Twitter confirms that there is an emotion flow underneath the Twitter network, and the emotion of public opinion does influence users' individual emotions (Naskar et al., 2020). A study on Weibo, China's most popular social media platform, also demonstrates that online opinion leaders are not objective "infomediaries," but influence users through emotional contagion (Bai and Xiao, 2011; Fu and Li, 2022).
For deeper analysis of emotion, the PAD model, first developed by Russell and Mehrabian (1977), has been widely adopted. Given that the human nervous system processes only two dimensions of emotion, arousal and valence, during interactions (Gerber et al., 2008), and that dominance has not been examined to the same extent as the other two factors, research has focused primarily on valence and arousal. On social media platforms, in both Chinese and English contexts, scholars find that valence and arousal have a positive impact on user response, which is conducive to communication (Zhou and Ning, 2020; Feng, 2024). In this study, we mainly examine the role of emotional arousal among the affective dimensions. Based on this, the second research hypothesis is proposed:
H2: When online opinion leaders express negative views on AI technology, higher levels of negative emotional arousal have a stronger impact on users' perceptions of AI technology.
Meanwhile, studies suggest that emotional language can harm the trustworthiness of scientists as well as the credibility of their arguments (König and Jucks, 2019). Conceptually, arousal level should be distinct from expert credibility, because it refers specifically to the emotional attributes of the information and is independent of the identity of online opinion leaders. This research therefore examines which factor, the expert credibility of online opinion leaders or the arousal level of a social media message, plays the more significant role in shaping users' AI technology perception, while also investigating how the two factors jointly influence and interact with such perceptions, leading to the following hypothesis:
H3: The interaction between emotional arousal and expert credibility has a significant influence on users' perceptions of AI technology.
Users' AI anxiety
Although this study is not a typical persuasion study, its model path is similar to that of persuasion research: both start from media information and examine its impact on users. Since the path of communication is not always smooth, exploring the mediating role of various variables along it has become a focus for scholars. Generally speaking, these mediating variables come from the source of the information, the characteristics of the information text, or the users themselves.
From the perspective of the chain of emotional communication, this article has already addressed the emotional characteristics of the information text, but users' own emotion-related variables have not yet been included in the investigation. Receptivity to scientific communication aimed at changing attitudes depends on an individual's prior beliefs and commitment to them (Houck et al., 2025). Incongruence between media content and emotions in terms of valence produces greater media effects: the overall positive tone of mediated information about AI led to greater public support among people who harbor more negative emotion (anger) and less positive emotion (hope) (Choi et al., 2024). Reactions toward AI may be infused with discrete emotions, rather than subtler feelings, because narratives about AI have been around for a long time, and such narratives revolve around discrete emotions such as fear and hope (Cave et al., 2019).
In the history of technological development, people have long been afraid of or anxious about new technologies; the term technophobia refers specifically to fear of the effects of technological developments on society or the environment. In the 1980s, the spread of computers triggered research on computer anxiety, a disabling level of anxiety created by actual or even imagined interaction with computers, or an internal dialogue that diminishes people's abilities and undermines their self-confidence (Rosen et al., 1987). Compared with computers, the changes brought about by AI technology are more revolutionary and raise more ethical challenges concerning the relationship between humans and machines. Scholars argue that AI anxiety cannot simply be regarded as an extension of previous technological anxiety and requires dedicated research; on this basis, the main factors behind AI anxiety have been examined (Li and Huang, 2020). From technophobia to present-day AI anxiety, these concepts all emphasize that users' own anxiety or fear affects their perception of technology. Therefore, this study proposes the following research hypothesis:
H4: Users' AI anxiety will mediate the path between online opinion leaders' expressions about AI technology and users' perception of AI technology.
Users' AI perception and technology acceptance model
As a dominant framework for operationalizing users' perceptions of technology, TAM's core constructs, Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), provide critical lenses for analyzing AI adoption. This model was proposed by Davis (1989), who applied rational behavior theory to study users' acceptance of information systems. The original intention behind TAM was to examine key factors determining widespread computer adoption, a research objective directly relevant to our study.
With technological evolution, TAM has been consistently applied to measure user perceptions across emerging technologies. Concurrently, studies indicate that emotional narratives may reconfigure perceived risks/benefits beyond PU/PEOU parameters (Hassanein and Head, 2007). Specifically, narrative frameworks emphasizing "emotional connection" or "spiritual resonance" in technology promotion can reshape users' risk–benefit assessments, potentially transcending traditional usefulness and ease-of-use boundaries.
Therefore, this study employs the TAM to operationalize perceptions of AI, specifically measuring the source characteristics of AI-related content and its embedded emotional arousal level to examine their interactive effects on perception formation.
Materials and methods
Participants
This study recruited participants for a behavioral experiment in May 2024. Recruitment information was primarily released in universities, and a total of 102 students participated. Among them, 54.9% were female and 45.1% male. Participants' ages ranged from 18 to 38 (Mage = 23.66, SD = 3.455). Undergraduate students accounted for 35.3% of the sample, while master's and doctoral students accounted for 50% and 12.7%, respectively. The experiment was approved by the Ethics Committee of our university.
Design and procedure
This within-subjects study employed a 2 (expert credibility: high/low) × 2 (emotional arousal: high/low) factorial design with repeated measures across experimentally controlled stimuli varying in expert credibility and message emotional arousal. The experiment comprised 4 distinct conditions, each consisting of 6 stimulus messages.
At the experiment's onset, participants were seated at 24 designated computers in the group laboratory, where they viewed pre-generated simulated Weibo screenshots presented in random order. After each stimulus screenshot, a series of questions was displayed, and participants responded based on the screenshot they had just viewed. The 30-min experimental procedure was designed and administered using E-Prime 3.0.
Measures
All measures during the formal experiment were assessed using a 7-point Likert scale, wherein higher scores indicate stronger agreement with the concept being measured.
Outcomes
As introduced earlier, we examined users' perceptions of AI technology using TAM. As shown in Table 1, each scale comprises 4 items (Cheng et al., 2006). To measure users' perceptions of artificial intelligence, we contextually adapted the original "Internet Banking (IB)" scale items to "Artificial Intelligence (AI)" applications, preserving the original Likert-scale structure.
Table 1
| Variable | Measurement |
|---|---|
| Perceived Ease of Use (PEOU) (Cronbach's alpha = 0.934) | Using the IB service is easy for me. |
| | I find my interaction with the IB services clear and understandable. |
| | It is easy for me to become skillful in the use of the IB services. |
| | Overall, I find the use of the IB services easy. |
| Perceived Usefulness (PU) (Cronbach's alpha = 0.929) | Using the IB would enable me to accomplish my tasks more quickly. |
| | Using the IB would make it easier for me to carry out my tasks. |
| | I would find the IB useful. |
| | Overall, I would find using the IB to be advantageous. |
Measurement of AI perception.
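The reliability coefficients reported in Table 1 (Cronbach's alpha = 0.934 and 0.929) can be computed directly from raw item scores. As a minimal sketch (Python with invented example responses, not the study's data), Cronbach's alpha for a respondents-by-items matrix is:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total scores)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented 7-point Likert responses for a 4-item scale (4 respondents).
scores = np.array([[1, 1, 1, 1],
                   [2, 2, 2, 2],
                   [5, 5, 5, 5],
                   [3, 3, 3, 3]], dtype=float)
print(cronbach_alpha(scores))  # perfectly consistent items give alpha = 1.0
```

Higher inter-item correlation yields alpha closer to 1; the high alphas reported above indicate strong internal consistency of both subscales.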
Mediating variables: AI anxiety
The measurement utilized a 4-item scale originally developed by Li and Huang (2020) (M = 2.47, SD = 1.39, Cronbach's alpha = 0.92). Participants rated their agreement with the following statements:
AI might harm humans in pursuit of a specific goal.
I am concerned that artificial intelligence could pose substantial risks to society at large.
I am apprehensive that artificial intelligence will attain a level of consciousness equivalent to humans.
I would feel uneasy due to my inability to discern whether AI possesses human consciousness or not.
Covariate: attitude towards AI technology
Participants were asked, "What is your attitude towards AI technology?" using a 7-point Likert scale to assess their sentiments, where 1 represents the most negative stance and 7 the most positive. This question was administered before the experiment began.
Stimuli
Negative emotion and level of arousal
Building upon established findings on negativity bias (Rozin and Royzman, 2001), we designed experiments focusing on negative emotional responses to AI technology.
First, a total of 200 messages were generated with ChatGPT 4.0 using the following prompts:
"Write a review with a negative attitude on the theme of artificial intelligence. Requirements: colloquial language, strong emotional expression, and a length of 60–80 characters."

"Write a review with a negative tone on the theme of artificial intelligence. Requirements: colloquial language, moderate emotional expression, and a length of 60–80 characters."

Since the specific usage scenarios of AI mostly revolve around AI Q&A, the following instruction was added: "Write a review with a negative tone on the theme of artificial intelligence, focusing on specific application scenarios other than AI Q&A. Requirements: colloquial language, strong emotional expression, and a length of 60–80 characters."

Because the generated content had highly similar sentence structures, a further instruction was given: "Express the above content in different sentence structures to vary the output."
Second, the AI-generated messages were translated into Chinese and manually polished for linguistic fluency. Thirty trained annotators then rated each message's emotional arousal using the Self-Assessment Manikin (SAM; Bradley and Lang, 1994), a 9-point pictorial scale (1 = low arousal to 9 = high arousal). Messages were categorized as high-arousal (mean score ≥ 5) or low-arousal (mean score ≤ 4), with extreme scorers (top/bottom 15%) selected as experimental stimuli (12 per group). Meanwhile, to ensure experimental rigor and eliminate the influence of valence, the valence of the selected materials was also manually assessed. The average valence of both sets of stimuli (high-arousal and low-arousal) was below 4.5 on the 9-point scale; thus both belong to the low-valence category.
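The categorization and selection step can be sketched as follows (Python; the function name and example ratings are illustrative inventions, while the thresholds follow the description above):

```python
import numpy as np

def select_stimuli(mean_arousal, n_per_group=12):
    """Split messages into arousal pools by mean SAM rating and keep the
    most extreme ones from each end as experimental stimuli."""
    mean_arousal = np.asarray(mean_arousal, dtype=float)
    order = np.argsort(mean_arousal)  # message indices, ascending by arousal
    low_pool = [int(i) for i in order if mean_arousal[i] <= 4.0]         # low-arousal
    high_pool = [int(i) for i in order[::-1] if mean_arousal[i] >= 5.0]  # high-arousal
    return high_pool[:n_per_group], low_pool[:n_per_group]

# Illustrative mean ratings for 7 messages on the 9-point SAM scale.
ratings = [1.5, 2.0, 3.9, 4.5, 5.1, 7.0, 8.2]
high, low = select_stimuli(ratings, n_per_group=2)
print(high, low)  # most extreme message indices: [6, 5] and [0, 1]
```

Note that messages with mean ratings between 4 and 5 (like index 3 above) fall into neither pool, mirroring the gap between the ≤ 4 and ≥ 5 cutoffs.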
Expert credibility of online opinion leaders
This study initially identified online opinion leaders on the Weibo platform based on the following criteria: having over 300,000 followers (with some exceeding one million) and possessing the platform's red "V" verification badge. Among them, some were authentic experts meeting the following qualifications: (1) professionals skilled in utilizing AI tools who regularly post recommendations about AI technology products; (2) executives from leading AI companies; and (3) offline scientists with established academic careers, while others were non-experts.
After selecting 22 online opinion leaders based on the aforementioned criteria, we created 22 simulated visual profiles replicating their identity information while concealing authentic personal details (including profile photos and original usernames). These simulated profiles underwent additional manual verification to reconfirm their expert credentials. Raters completed an online questionnaire displaying screenshots of the 22 fabricated opinion leader profiles, rating each influencer's perceived expertise on a 5-point Likert scale. After calculating the average credibility score for each online opinion leader, we selected the 6 individuals with the lowest scores (ranging from 1.61 to 2.39) as low-expert-credibility opinion leaders for the experimental control group. Conversely, the 6 individuals with the highest scores (ranging from 3.28 to 3.83) were designated as high-expert-credibility opinion leaders.
For the formal experiment, we generated simulated microblog post screenshots incorporating the fabricated identity information from the aforementioned opinion leader profiles. These materials were systematically deployed as experimental stimuli to test users' perception mechanisms. Each simulated online opinion leader was assigned one high-arousal stimulus message and one low-arousal stimulus message.
Results
Manipulation check
To assess the efficacy of the arousal manipulation, participants rated their arousal level after the presentation of each message during the experiment. An independent-samples t-test revealed that participants' scores for the materials in the high-arousal group were significantly higher than those in the low-arousal group (p < 0.001, Mhigh-arousal = 4.928, Mlow-arousal = 3.940), confirming the effectiveness of the manipulation.
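This check is a standard two-group comparison. A brief sketch (Python/SciPy) on simulated ratings whose means mirror the reported values; the SDs and sample sizes are assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-message arousal ratings (means mirror 4.928 vs. 3.940).
high_arousal = rng.normal(loc=4.93, scale=1.0, size=200)
low_arousal = rng.normal(loc=3.94, scale=1.0, size=200)

# Independent-samples t-test, as reported in the manipulation check.
t_stat, p_value = stats.ttest_ind(high_arousal, low_arousal)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")  # large t, p far below 0.001
```

With a mean difference of about one standard deviation and 200 ratings per group, the test is very well powered, which is consistent with the decisive result reported.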
Hypothesis testing
Hypothesis one predicted that reading messages from online opinion leaders with different degrees of credibility would strongly affect participants' perception of technology. The results indicated that the main effect of online opinion leaders' credibility was significant for both PEOU (F(1, 77) = 8.418, p < 0.01, ηp² = 0.099) and PU (F(1, 82) = 25.751, p < 0.001, ηp² = 0.239).
For PEOU, participants reported higher perceptions under high-credibility conditions than under low-credibility conditions (Mhigh-credibility = 3.91, SD = 1.16 versus Mlow-credibility = 3.83, SD = 1.15; t = 1.63, p > 0.05). Although this pairwise difference was not significant, it still suggested a trend: users who received messages from high-credibility online opinion leaders reported more positive PEOU.
Regarding PU, the difference between the low-credibility and high-credibility conditions was more pronounced. Participants in the high-credibility condition reported higher PU than those in the low-credibility condition (Mhigh-credibility = 4.77, SD = 1.11 versus Mlow-credibility = 4.58, SD = 1.18; t = 4.02, p < 0.001). This difference indicates that users who received messages from high-credibility online opinion leaders found AI technology more useful than when they read messages from low-credibility online opinion leaders. Overall, these findings suggest that the credibility of online opinion leaders plays a role in shaping participants' perception levels, particularly for PU. Therefore, H1 is supported.
Hypothesis two predicted that participants exposed to messages with high-arousal emotions would report higher levels of AI perceptions than those exposed to messages with low-arousal levels. To test this hypothesis, we employed the same method as for H1. As expected, the repeated-measures ANOVA results revealed that the main effects of arousal level were significant for both PEOU (F(1, 77) = 21.433, p < 0.001, ηp² = 0.218) and PU (F(1, 82) = 44.249, p < 0.001, ηp² = 0.350). After reading high-arousal messages, participants reported higher PEOU scores than after reading low-arousal messages (Mhigh-arousal = 3.95, SD = 1.18 versus Mlow-arousal = 3.79, SD = 1.12; t = 2.03, p < 0.05). Similarly, for PU, the high-arousal group exhibited significantly higher ratings than the low-arousal group (Mhigh-arousal = 4.83, SD = 1.09 versus Mlow-arousal = 4.52, SD = 1.19; t = 4.10, p < 0.001). Thus, H2 is also supported.
Regarding hypothesis three, the interaction effects between arousal level and credibility were also significant for PEOU (F = 8.365, p < 0.01, ηp² = 0.098) and PU (F = 5.114, p < 0.05, ηp² = 0.059). As illustrated in Figure 1, for PEOU under the high-credibility condition, participants reported higher scores after reading high-arousal messages than after low-arousal messages (Mhigh-arousal = 4.004, SD = 1.069 vs. Mlow-arousal = 3.811, SD = 1.012; t = −4.961, p < 0.001). Under the low-credibility condition, however, the difference in PEOU scores between high-arousal and low-arousal messages was not significant (Mhigh-arousal = 3.880, SD = 0.982 vs. Mlow-arousal = 3.809, SD = 1.012; t = −1.829, p > 0.05).
Figure 1
For PU (see Figure 2), post-hoc analysis revealed that both the high-credibility and low-credibility groups exhibited significant differences between low- and high-arousal conditions. In the high-expert-credibility group, participants exposed to high-arousal messages perceived AI as more useful than those exposed to low-arousal messages (Mhigh-arousal = 4.951, SD = 0.809 versus Mlow-arousal = 4.636, SD = 0.895; t = −6.493, p < 0.001). For the low-expert-credibility group, the difference was also significant (Mhigh-arousal = 4.725, SD = 0.901 versus Mlow-arousal = 4.507, SD = 1.045; t = −5.167, p < 0.001). Therefore, H3 is partially supported.
Figure 2
The moderated mediation analysis
We conducted mediation and moderation analyses based on 5,000 bootstrap samples (Hayes, 2013, PROCESS Model 8) to test whether AI anxiety mediated the observed differences in AI perception, with AI anxiety as the mediator, credibility as the moderator, and attitude as a covariate (see Figure 3); credibility was dummy coded (low-credibility = 0, high-credibility = 1).
Figure 3
The moderated mediation models show significant results. With PEOU as the outcome variable (see Table 2), the whole model was significant (F(5, 2409) = 20.4377, p < 0.001, R² = 0.041). The interaction of message arousal and opinion leader credibility had a significantly positive effect on AI anxiety (B = 0.334, t = 2.9898, p < 0.01). Simple slope analysis showed that (see Figure 4), for the low-credibility group, the arousal level of messages had a significantly positive effect on participants' AI anxiety (simple slope = 0.7116, t = 8.9010, p < 0.001). For the high-credibility group, the positive effect of arousal level on AI anxiety was even larger (simple slope = 1.0455, t = 13.1372, p < 0.001). Furthermore, the indirect effect of AI anxiety differed significantly across degrees of credibility (index of moderated mediation = −0.0321; 95% confidence interval [CI]: [−0.0586, −0.0094]). However, we found that AI anxiety exerted a suppression effect (see Table 3).
Table 2
| | Outcome: (1) PEOU | Mediator: (2) AI anxiety | Outcome: (3) PEOU |
|---|---|---|---|
| (Intercept) | 3.870*** (0.023) | 3.746*** (0.028) | 3.871*** (0.023) |
| Attitude | 0.162*** (0.023) | −0.265*** (0.028) | 0.137*** (0.023) |
| Arousal | 0.165*** (0.046) | 0.879*** (0.056) | 0.250*** (0.048) |
| Credibility | | −0.100 (0.056) | 0.067 (0.046) |
| Arousal × credibility | | 0.334** (0.113) | 0.204* (0.092) |
| AI anxiety | | | −0.096*** (0.017) |
| R² | 0.025 | 0.125 | 0.041 |
| Adj. R² | 0.024 | 0.124 | 0.039 |
| Num. obs. | 2,415 | 2,415 | 2,415 |

Moderated mediation model (outcome: PEOU).

***p < 0.001; **p < 0.01; *p < 0.05; PEOU is perceived ease of use.
Figure 4
Table 3
| Pathways | Expert credibility | Effect | BootSE | LLCI | ULCI |
|---|---|---|---|---|---|
| Arousal → AI anxiety → PEOU | Credibility degree = 0 | −0.0685 | 0.0151 | −0.1001 | −0.0411 |
| | Credibility degree = 1 | −0.1006 | 0.0200 | −0.1405 | −0.0618 |
| Index of moderated mediation | | −0.0321 | 0.0124 | −0.0586 | −0.0094 |

Indirect effects at different degrees of expert credibility.

PEOU is perceived ease of use.
Turning to PU, the whole model was significant (F(5, 2417) = 38.6365, p < 0.001, R² = 0.074), and the moderating effect of credibility was significant (see Table 4). The interaction effect of arousal and credibility on AI anxiety was also significant (B = 0.317, t = 2.8134, p < 0.01). Moreover, at different degrees of credibility, the indirect (suppression) effect of AI anxiety was significant (indirect effect = −0.0255; 95% confidence interval [CI]: [−0.0493, −0.0063]) (see Table 5). Because AI anxiety acted as a suppressor rather than a conventional mediator, H4 is not supported.
Table 4
| | Outcome: (1) PU | Mediator: (2) AI anxiety | Outcome: (3) PU |
|---|---|---|---|
| (Intercept) | 4.675*** (0.023) | 3.734*** (0.028) | 4.675*** (0.022) |
| Attitude | 0.223*** (0.023) | −0.273*** (0.028) | 0.201*** (0.023) |
| Arousal | 0.320*** (0.045) | 0.865*** (0.056) | 0.389*** (0.047) |
| Credibility | | −0.100 (0.056) | 0.179*** (0.045) |
| Arousal × credibility | | 0.317** (0.113) | 0.162 (0.090) |
| AI anxiety | | | −0.080*** (0.016) |
| R² | 0.057 | 0.124 | 0.074 |
| Adj. R² | 0.056 | 0.123 | 0.072 |
| Num. obs. | 2,423 | 2,423 | 2,423 |

Moderated mediation model (outcome: PU).

PU is perceived usefulness. ***p < 0.001; **p < 0.01; *p < 0.05.
Table 5
| Pathways | Expert credibility | Effect | BootSE | LLCI | ULCI |
|---|---|---|---|---|---|
| Arousal → AI anxiety → PU | Credibility degree = 0 | −0.0568 | 0.0148 | −0.0885 | −0.0305 |
| | Credibility degree = 1 | −0.0823 | 0.0198 | −0.1215 | −0.0449 |
| Index of moderated mediation | | −0.0255 | 0.0109 | −0.0493 | −0.0063 |
Indirect effects at different degrees of expert credibility.
PU is perceived usefulness.
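For readers who wish to verify the moderated-mediation arithmetic, the index of moderated mediation reported in Table 5 is, in a Hayes-style first-stage moderated mediation model, the product of the arousal × credibility coefficient from the mediator model and the AI-anxiety coefficient from the outcome model (both reported in Table 4). A minimal sketch follows; the variable names are ours, and because the published coefficients are rounded, the bootstrapped conditional effects in Table 5 will not reproduce exactly from them:

```python
# Index of moderated mediation for the arousal -> AI anxiety -> PU path,
# computed from the rounded OLS coefficients reported in Table 4.
a1 = 0.865   # arousal -> AI anxiety (mediator model)
a3 = 0.317   # arousal x credibility -> AI anxiety (mediator model)
b = -0.080   # AI anxiety -> PU (outcome model)

# Conditional indirect effect at moderator level W: (a1 + a3 * W) * b
for w in (0, 1):
    print(f"indirect effect at credibility={w}: {(a1 + a3 * w) * b:.4f}")

# The index of moderated mediation is the difference between the two
# conditional effects, i.e. a3 * b, matching the reported -0.0255 up to rounding.
print(f"index of moderated mediation: {a3 * b:.4f}")
```

The negative index reflects the pattern in Table 5: the suppressing indirect effect through AI anxiety is stronger when the source is a high-credibility expert.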
Conclusion and discussion
We acknowledge that the impact of emotional communication on users goes far beyond the level of perception. In terms of communication effectiveness, emotional communication displays distinct characteristics across media contexts. Offline attitudes toward the same event are more rational and evenly distributed than online attitudes, which are prone to polarization (Yong et al., 2016). Extensive research on Weibo has found that emotionally charged content aligns more closely with users' cognitive patterns and is therefore more likely to be shared (Zhao and Tu, 2012). This study investigates how online opinion leaders on Weibo influence public perceptions of AI technologies through two key dimensions: (1) their expert status in AI-related fields, and (2) the emotional arousal level of their messages. Drawing on the Technology Acceptance Model (TAM), we measured perception outcomes through two core components: perceived usefulness (PU) and perceived ease of use (PEOU). Furthermore, the research incorporates users' emotional disposition by examining the mediating role of AI anxiety throughout the cognitive evaluation process.
The research demonstrates that varying levels of emotional arousal significantly influence users' perceptions of AI technology, and that the expert status of online opinion leaders exerts a measurable impact on shaping these perceptions. Notably, users' AI anxiety was found to exert a suppressive effect throughout this cognitive evaluation process.
The impact of online opinion leaders' expert credibility on users' perception
The results indicate that the expert credibility of online opinion leaders has an impact on users, whether it pertains to their PEOU or PU. In the diffusion of controversial technological innovations, experts influence the rate and extent of acceptance by serving as opinion leaders (Leonard-Barton, 1985). Contrary to expectations, negative messages disseminated by high-expert credibility opinion leaders were found to paradoxically enhance usersā perceptions of AI technology as being more user-friendly and beneficial, compared to messages from low-expert credibility opinion leaders.
Users have developed a state of distrust toward experts in specialized fields and individuals with high credibility, especially as attacks on experts via social media have been shown to damage their credibility (Gierth and Bromme, 2020). When people are mistrustful, they spontaneously activate associations that are incongruent with the given message (Schul et al., 2004). This incongruence arising from distrust was confirmed again in our experiments. Meanwhile, online opinion leaders accelerate the information stream and the adoption process itself (Eck et al., 2011). This acceleration increases users' exposure to such information, making shifts in perception and adoption more likely, irrespective of users' attitudes toward the information.
Effect of emotional arousal level
Experimental evidence confirms that negative messages with heightened emotional arousal significantly increase the likelihood that users perceive AI technology as more advanced and user-friendly, thereby elevating scores on the Technology Acceptance Model (TAM) scales. This phenomenon may be attributed to the intensification of negative information, which prompts users to adopt a more cautious stance and thereby cultivates a more conservative perspective. When individuals make decisions, various factors may produce a cautious shift or a risky shift (Stoner, 1961). The concept of group polarization in most studies belongs to the "risky shift" and assumes that negative, high-arousal emotions may lead to irrational group polarization (Liao et al., 2023), suggesting that high-arousal negative emotions typically elicit a risky shift in behavior. Our experimental findings, however, indicate the possibility of a "cautious shift." Although high-arousal emotions tend to attract more attention from users, users are prone to consciously reject such information because of their emotions and subsequently gather arguments to refute it (Kunda, 1990). Higher affective intensity provokes motivated reasoning, which in turn leads to opinion polarization (Asker and Dinas, 2019), reinforcing users' commitment to their own stances.
Second, our findings demonstrate that, within a negative tone, high-arousal emotion elevates users' vigilance and prompts them to adopt a more cautious attitude when assessing messages. Consequently, as the arousal associated with negative information escalates, the persuasive effectiveness of the information diminishes progressively. These findings supplement previous research to a certain degree.
Interaction effect on users' perception
The results of the interaction effect show that, at the level of PEOU, the interaction between the expertise credibility of online opinion leaders and emotional arousal has a significant impact on usersā perception regarding AI technology. When users receive information from high-credibility experts, high-arousal negative content actually makes users perceive AI technology as easier to use. However, this difference is not pronounced when users receive information from non-experts. This has been attributed to the influence of anti-intellectualism (White, 1962). When we harness modern technology to serve us, we are also implicitly consenting to, complying with, or being compelled to accept various rules set by modern technology at both the cognitive and value levels (Ge, 2023). This can elicit feelings⦠Especially when users receive information from so-called āexpertsā, this anti-intellectual sentiment may intensify, resulting in a paradoxical effect where the more users receive highly arousing negative information from these experts, the stronger their positive perceptions become. Since PEOU influences both PU and attitude toward using (Davis, 1989), usersā attitude toward AI technology may be jointly influenced by the expertise credibility of online opinion leaders and the level of emotional arousal, representing the combined effect of anti-intellectual sentiment and cautious bias.
For PU, the interaction effect further demonstrates that, unlike PEOU, which is influenced by the credibility of online opinion leaders, users' perception of the usefulness of AI technology is also affected by the degree of emotional arousal of these leaders even when they lack professional credentials. Specifically, as the level of negative emotional arousal of online opinion leaders increases, users' PU also rises, meaning users are more inclined to believe that AI technology is useful. In other words, for PU, users are influenced less by the credibility of the source and more by emotions. Research shows that PU affects behavioral intention (Yong et al., 2016). The arousal level of negative emotions thus primarily influences the perception aspects that are more behavior-related, while its impact on attitude intention was not confirmed.
The moderating effect of online opinion leadersā expert credibility degree on usersā AI anxiety
The study finds that expert online opinion leaders can, to a certain extent, reduce users' levels of AI anxiety, especially when the emotional arousal of the negative information they convey is low. Specifically, the first stage of the indirect path from the information's emotional arousal (see Figure 3) to users' perception of AI technology via their level of AI anxiety is moderated by the expert credibility of online opinion leaders: regardless of the opinion leader's expert credibility, negative information reduces users' positive perception of AI technology, including PEOU and PU, by increasing their level of AI anxiety. However, when the information comes from non-experts, this effect is more pronounced at low levels of emotional arousal than when the source is an expert. As emotional arousal increases, the effect for the two source types converges and then gradually intensifies from non-experts to experts. The impact of the interplay between opinion leaders' expert credibility and the information's emotional arousal on users' AI anxiety is illustrated in Figure 5.
Figure 5
Overall, high-credibility online opinion leaders tend to have a weaker impact on users' level of AI anxiety than low-credibility online opinion leaders. Moreover, when the public places trust in risk managers and experts, communication flows more smoothly; conversely, in the absence of trust, communication encounters greater challenges (Fessenden-Raden et al., 1987). Yudkowsky (2008) suggested that AI may raise global risks, including the risk of a super AI destroying humanity and the risk of the rapid evolution of AI. When confronted with such risks, the public is unlikely to place trust in online "experts."
The suppressing effect of AI anxiety
The results of the moderated mediation analysis reveal that usersā AI anxiety levels exert a suppressing effect on the relationship between the emotional arousal intensity conveyed by online opinion leaders and usersā technological perception.
After receiving information from online opinion leaders of varying expert credibility, users experience an increase in AI anxiety, which subsequently reduces their positive perception of AI technology, making them more inclined to believe that AI technology is less useful and more difficult to use. Negative affective states, particularly those of high arousal, may prime anxiety-related schemas through automatic affective priming (Fazio et al., 1986). This process limits attentional resources to threat-relevant cues (Easterbrook, 1959), resulting in a narrowed cognitive scope that amplifies risk perceptions.
Technophobia, similar to AI anxiety, is a reaction to the interaction between humans' internal crises and the negative effects of technology; it is not merely a negative emotional response but also encompasses the subject's active coping process (Wang, 2024), and it manifests as technology anxiety and technology stress (Lei and Baohua, 2014). As users' AI anxiety intensifies, their assessment of AI technology's usefulness and ease of use diminishes, indicating that users may be coping with the fear induced by AI technology by reducing their acceptance of it. Research has already shown that the emotions of communicators in cyberspace can influence the emotions of information recipients, further impacting their subsequent behaviors.
The anxiety audiences experience when communicating with others is usually based on negative expectations (Gudykunst and Nishida, 2001), and one of the behavioral consequences of anxiety is avoidance (Stephan and Stephan, 1985). As users' AI anxiety intensifies, they come to avoid engaging with AI. AI anxiety is a comprehensive concept encompassing various negative emotions such as fear, anxiety, and worry. Classic fear-appeal studies found that increasing levels of fear did not increase acceptance of beliefs about the proper type of toothbrush to use (Janis and Feshbach, 1953) or recommendations to stop smoking (Leventhal and Niles, 1964). Likewise, fear related to AI does lead users to diminish their perception of AI technology's usefulness and ease of use, and on this basis users may decrease their behavioral intentions toward AI technology. Research has shown that some minimal amount of fear is necessary for behavior change, but that further increases in fear do not affect change (Leventhal and Niles, 1965). Most of these findings come from the health field; our experiment broadens them to sociotechnical imaginaries. Which factor within this comprehensive negative emotional concept is most influential remains to be explored. By taking AI anxiety as a mediating variable, however, our study provides a new integrated perspective for revealing how expert online opinion leaders' emotional communication influences users' perception.
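The suppressing indirect effects discussed here were established with percentile-bootstrap confidence intervals, the standard procedure in conditional process analysis (cf. Hayes, 2013). The sketch below illustrates that procedure on simulated data; the variable names, effect sizes, and sample size are hypothetical and do not reproduce the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Simulated data (hypothetical effect sizes, not the study's dataset):
# x = arousal condition, m = AI anxiety, y = perceived usefulness.
x = rng.integers(0, 2, n).astype(float)
m = 0.8 * x + rng.normal(0.0, 1.0, n)            # a-path: arousal raises anxiety
y = 0.5 * x - 0.1 * m + rng.normal(0.0, 1.0, n)  # b-path: anxiety lowers PU

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS regressions."""
    a = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones(len(x)), x, m]), y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap: resample cases with replacement, re-estimate a*b each time.
boot = np.empty(500)
for i in range(500):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])

# A CI entirely below zero indicates a significant suppressing indirect effect.
print(f"indirect = {indirect_effect(x, m, y):.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")
```

Because the true indirect effect in this simulation is 0.8 × (−0.1) = −0.08, the bootstrap interval falls entirely below zero, mirroring the pattern reported in Tables 3 and 5.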
Based on these findings, science communication practitioners should manage the emotional tone of AI discussions. First, they can make full use of the "expert credibility paradox": negative information delivered by experts can enhance users' perceived ease of use. Second, they should calibrate emotional arousal to the scenario, avoiding the irrational cautious shifts triggered by high-arousal emotions. Finally, they should establish a "trust reserve" mechanism, accumulating social capital through routine transparent communication and immediately providing concrete action plans when monitoring shows user anxiety exceeding critical thresholds, thereby converting anxiety into engagement momentum.
Limitations and directions for future research
The study employed an experimental design with participants recruited exclusively from university settings using simulated Weibo platform materials. This approach resulted in sample size limitations and experimental findings whose generalizability should be interpreted cautiously. Given the constrained sampling frame (Chinese university students) and platform-specific stimuli (Weibo simulations), extrapolations to broader populations or different social media platforms require careful consideration. Consequently, future research should extend these findings through rigorous cross-cultural validation across diverse user demographics.
This research focuses on the theme of AI technology, a field that is experiencing rapid development and significantly impacting todayās society. However, since emotional communication can be applied to many other fields, and emotion includes more than just anxiety (which was the focus of this study), future research is encouraged to investigate whether these findings can be extended to other areas or to explore additional diverse outcomes. Furthermore, future exploration may focus more intently on the mechanisms underlying the various dimensions of emotional communication within social networks. Emotional communication is closely related to several pivotal theories; therefore, based on different theoretical perspectives, future studies can be expanded into different dimensions of emotional communication.
In addition, as this study pointed out, negative expressions about AI technology from online opinion leaders with expertise credibility actually increased usersā PEOU and PU of AI technology. This contradicts traditional research, which suggests that online opinion leaders with professional authority are more likely to influence users to accept their viewpoints. The underlying reasons for this phenomenon may be the distinctive nature of AI-related issues, a possible decline in the group influence of expert online opinion leaders in contemporary society, or even the resistance effect triggered by decreasing trust. Therefore, more empirical studies on different topics are needed in the future to further examine the expertise credibility and degree of online opinion leaders, and this may vary across different social contexts. Furthermore, future research could explore other moderating variables of online opinion leaders beyond credibility, such as likability and familiarity.
Finally, this study also examines the mediating role of AI anxiety. This concept shares some similarities with technophobia, as both focus on users' negative emotions. However, from the perspective of emotional communication mechanisms, both concepts are relatively broad and contain a wide range of different emotions. Future research could therefore delve into specific emotions such as fear and boredom, which would help further clarify the role of AI anxiety or technophobia in communication. In the present study, we examined the different properties of online opinion leaders and the emotional mechanisms involved and, considering the role of users' own emotions, incorporated AI anxiety into the analysis.
Statements
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://osf.io/qc3z5/?view_only=40f9f326008c45cd903ad49de34bcf2c.
Author contributions
WL: Formal analysis, Writing ā original draft, Data curation, Conceptualization, Validation, Methodology, Investigation. YJ: Methodology, Conceptualization, Writing ā review & editing, Validation. WD: Funding acquisition, Project administration, Resources, Writing ā review & editing, Supervision. AT: Software, Data curation, Writing ā review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. The research was supported by the open project "The Emotional Communication Mechanism of Online Opinion Leaders and the Impact on Audience Attitude and Behavior" (Grant No. 41004400/002) of the Open Research Fund of the Shanghai Key Laboratory of Brain-Machine Intelligence for Information Behavior.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that no Gen AI was used in the creation of this manuscript.
Publisherās note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Asker, D., and Dinas, E. (2019). Thinking fast and furious: emotional intensity and opinion polarization in online media. Public Opin. Q. 83, 487–509. doi: 10.1093/poq/nfz042
Bai, S., and Xiao, B. (2011). Affection mobilization of the Sina microbloggers. J. Lanzhou Univ. 39, 60–68.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., and Vohs, K. D. (2001). Bad is stronger than good. Rev. Gen. Psychol. 5, 323–370. doi: 10.1037/1089-2680.5.4.323
Besley, J. C., and Hill, D. (2020). Science and technology: public attitudes, knowledge, and interest. Science and Engineering Indicators 2020. National Science Foundation. Available online at: https://files.eric.ed.gov/fulltext/ED612113.pdf
Bradley, M. M., and Lang, P. J. (1994). Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25, 49–59. doi: 10.1016/0005-7916(94)90063-9
Cave, S., Coughlan, K., and Dihal, K. (2019). "'Scary robots': examining public responses to AI," in Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society.
Cheng, T. C. E., Lam, D. Y. C., and Yeung, A. C. L. (2006). Adoption of internet banking: an empirical study in Hong Kong. Decis. Support. Syst. 42, 1558–1572. doi: 10.1016/j.dss.2006.01.002
Choi, S., Lee, C.-j., Park, A., and Lee, J. A. (2024). How the public makes sense of artificial intelligence: the interplay between communication and discrete emotions. Sci. Commun. 47, 553–584. doi: 10.1177/10755470241297664
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. Manag. Inf. Syst. Q. 13, 319–340. doi: 10.2307/249008
Easterbrook, J. A. (1959). The effect of emotion on cue utilization and the organization of behavior. Psychol. Rev. 66, 183–201. doi: 10.1037/h0047707
Eck, V., Peter, S., Jager, W., and Leeflang, P. S. H. (2011). Opinion leaders' role in innovation diffusion: a simulation study. J. Prod. Innov. Manag. 28, 187–203. doi: 10.1111/j.1540-5885.2011.00791.x
Fähnrich, B., and Lüthje, C. (2017). Roles of social scientists in crisis media reporting: the case of the German populist radical right movement PEGIDA. Sci. Commun. 39, 415–442. doi: 10.1177/1075547017715472
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., and Kardes, F. R. (1986). On the automatic activation of attitudes. J. Pers. Soc. Psychol. 50:229. doi: 10.1037/0022-3514.50.2.229
Feng, G. C. (2024). Effects of narratives and information valence in digital headlines on user responses. Asian J. Commun. 34, 156–177. doi: 10.1080/01292986.2024.2317308
Fessenden-Raden, J., Fitchen, J. M., and Heath, J. S. (1987). Providing risk information in communities: factors influencing what is heard and accepted. Sci. Technol. Hum. Values 12, 94–101. Available online at: https://www.jstor.org/stable/689388
Fu, X., and Li, Q. (2022). "Ideological and political education of minority students based on network information security model," in 2022 International Conference on Educational Innovation and Multimedia Technology (EIMT 2022). Atlantis Press.
Ge, Y. (2023). How did technology experts lose public trust? The technological origins of Western anti-intellectualism and contemporary reflections. Studi. Philos. Sci. Technol. 40, 116–121.
Gerber, A. J., Posner, J., Gorman, D., Colibazzi, T., Yu, S., Wang, Z., et al. (2008). An affective circumplex model of neural systems subserving valence, arousal, and cognitive overlay during the appraisal of emotional faces. Neuropsychologia 46, 2129–2139. doi: 10.1016/j.neuropsychologia.2008.02.032
Gierth, L., and Bromme, R. (2020). Attacking science on social media: how user comments affect perceived trustworthiness and credibility. Public Underst. Sci. 29, 230–247. doi: 10.1177/0963662519889275
Goldenberg, J., Lehmann, D. R., Shidlovski, D., and Barak, M. M. (2006). The role of expert versus social opinion leaders in new product adoption. Mark. Sci. Inst. Rep. 6, 67–84.
Goldsmith, R. E., Lafferty, B. A., and Newell, S. J. (2000). The impact of corporate credibility and celebrity credibility on consumer reaction to advertisements and brands. J. Advert. 29, 43–54. doi: 10.1080/00913367.2000.10673616
Gudykunst, W. B., and Nishida, T. (2001). Anxiety, uncertainty, and perceived effectiveness of communication across relationships and cultures. Int. J. Intercult. Relat. 25, 55–71. doi: 10.1016/S0147-1767(00)00042-0
Hassanein, K., and Head, M. (2007). Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping. Int. J. Hum.-Comput. Stud. 65, 689–708. doi: 10.1016/j.ijhcs.2006.11.018
Hayes, A. F. (2013). "Mediation, moderation, and conditional process analysis," in Introduction to mediation, moderation, and conditional process analysis: a regression-based approach, 12–20.
Houck, A. M., King, A. S., and Taylor, J. B. (2025). The effect of experts on attitude change in public-facing political science: scientific communication on term limits in the United States. Public Underst. Sci. 34, 19–37. doi: 10.1177/09636625241246084
Humm, C., Schrögel, P., and Leßmöllmann, A. (2020). Feeling left out: underserved users in science communication. Media Commun. 8, 164–176. doi: 10.17645/mac.v8i1.2480
Janis, I. L., and Feshbach, S. (1953). Effects of fear-arousing communications. J. Abnorm. Soc. Psychol. 48, 78–92. doi: 10.1037/h0060732
Jasanoff, S. (2015). "Future imperfect: science, technology, and the imaginations of modernity," in Dreamscapes of modernity: sociotechnical imaginaries and the fabrication of power, 1–33.
König, L., and Jucks, R. (2019). Influence of enthusiastic language on the credibility of health information and the trustworthiness of science communicators: insights from a between-subject web-based experiment. Interact. J. Med. Res. 8:e13619. doi: 10.2196/13619
Kunda, Z. (1990). The case for motivated reasoning. Psychol. Bull. 108, 480–498. doi: 10.1037/0033-2909.108.3.480
Lei, Z., and Baohua, X. (2014). The structure and formation model of technophobia. J. Dialectics Nat. 36, 70–127.
Leonard-Barton, D. (1985). Experts as negative opinion leaders in the diffusion of a technological innovation. J. Consum. Res. 11, 914–926. doi: 10.1086/209026
Leventhal, H., and Niles, P. (1964). A field experiment on fear arousal with data on the validity of questionnaire measures. J. Pers. 32, 459–479. doi: 10.1111/j.1467-6494.1964.tb01352.x
Leventhal, H., and Niles, P. (1965). Persistence of influence for varying durations of exposure to threat stimuli. Psychol. Rep. 16, 223–233. doi: 10.2466/pr0.1965.16.1.223
Li, J., and Huang, J.-S. (2020). Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technol. Soc. 63:101410. doi: 10.1016/j.techsoc.2020.101410
Liao, S., Cheng, J., and Yu, J. (2023). User interaction and group polarization in online opinion expression: emotions as a mediating variable. Chin. J. J. Commun. 45, 91–117.
Lidskog, R., Berg, M., Gustafsson, K. M., and Löfmarck, E. (2020). Cold science meets hot weather: environmental threats, emotional messages and scientific storytelling. Media Commun. 8, 118–128. doi: 10.17645/mac.v8i1.2432
Lupton, D. (2017). "Download to delicious": promissory themes and sociotechnical imaginaries in coverage of 3D printed food in online news sources. Futures 93, 44–53. doi: 10.1016/j.futures.2017.08.001
Naskar, D., Singh, S. R., Kumar, D., Nandi, S., and Rivaherrera, E. O. d. l. (2020). Emotion dynamics of public opinions on Twitter. ACM Trans. Inf. Syst. 38, 1–24. doi: 10.1145/3379340
Niemann, P., Bittner, L., Schrögel, P., and Hauser, C. (2020). Science slams as edutainment: a reception study. Media Commun. 8, 177–190. doi: 10.17645/mac.v8i1.2459
Pew Research Center (2017). Science news and information today. Available online at: https://www.pewresearch.org/journalism/2017/09/20/science-news-and-information-today/
Pielke, R. A. Jr. (2007). The honest broker: making sense of science in policy and politics. Cambridge University Press. Minerva 46, 485–489.
Plutchik, R. (1984). Emotions and imagery. J. Ment. Imagery 8, 105–111.
Quinlan, A. (2021). The rape kit's promise: techno-optimism in the fight against the backlog. Sci. Cult. 30, 440–464. doi: 10.1080/09505431.2020.1846696
Rogers, E. M., and Cartano, D. G. (1962). Methods of measuring opinion leadership. Public Opin. Q. 26, 435–441. doi: 10.1086/267118
Rosen, L. D., Sears, D. C., and Weil, M. M. (1987). Computerphobia. Behav. Res. Methods Instrum. Comput. 19, 167–179. doi: 10.1016/0747-5632(94)00021-9
Rozin, P., and Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personal. Soc. Psychol. Rev. 5, 296–320. doi: 10.1207/S15327957PSPR0504_2
Russell, J. A., and Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. J. Res. Pers. 11, 273–294. doi: 10.1016/0092-6566(77)90037-X
Schul, Y., Mayo, R., and Burnstein, E. (2004). Encoding under trust and distrust: the spontaneous activation of incongruent cognitions. J. Pers. Soc. Psychol. 86, 668–679. doi: 10.1037/0022-3514.86.5.668
Stephan, W. G., and Stephan, C. W. (1985). Intergroup anxiety. J. Soc. Issues 41, 157–175. doi: 10.1111/j.1540-4560.1985.tb01134.x
Stoner, J. A. F. (1961). A comparison of individual and group decisions involving risk. Dissertation: Massachusetts Institute of Technology.
Taddicken, M., and Reif, A. (2020). Between evidence and emotions: emotional appeals in science communication. Media Commun. 8, 101–106. doi: 10.17645/mac.v8i1.2934
Taddicken, M., and Wolff, L. (2020). "Fake news" in science communication: emotions and strategies of coping with dissonance online. Media Commun. 8, 206–217. doi: 10.17645/mac.v8i1.2495
Van Kleef, G. A., Van Doorn, E. A., Heerdink, M. W., and Koning, L. F. (2011). Emotion is for influence. Eur. Rev. Soc. Psychol. 22, 114–163. doi: 10.1080/10463283.2011.627192
Visentin, M., Pizzi, G., and Pichierri, M. (2019). Fake news, real problems for brands: the impact of content truthfulness and source credibility on consumers' behavioral intentions toward the advertised brands. J. Interact. Mark. 45, 99–112. doi: 10.1016/j.intmar.2018.09.001
Wang, B. (2024). Technophobia: traceability, evaluation and values. Chengdu: Sichuan People's Publishing House.
White, M. (1962). Reflections on anti-intellectualism. Daedalus, 457–468. Available online at: https://www.jstor.org/stable/20026723
Yong, L., Mengsi, C., and Kai, Z. (2016). Influencing factors and differences of the online and offline emotional spread of social network users: "female driver beaten by a male driver in Chengdu" case as an example. J. Intelligence 6, 80–85.
Yudkowsky, E. (2008). "Artificial intelligence as a positive and negative factor in global risk," in Global catastrophic risks, 1:184.
Zhao, Y., and Tu, L. (2012). The functions of music in media convergence product: an exploratory experiment on electronic magazine. J. Int. Commun. 7, 65–71.
Zhou, Q., and Ning, Y. (2020). Arousal, valence, and dominance: reconstructing the path of political communication on Twitter under the influence of emotions. Modern Commun. 42, 53–59.
Zhuang, X., and Zhang, C. (2022). Discourse construction of scientific issues and their societal technoscientific imaginaries: a case study of Weibo discussions on Pfizer's COVID-19 vaccine. Shanghai Journal. Rev. 3, 14–23+85.
Summary
Keywords
online opinion leader, expert credibility, AI anxiety, emotional communication, perception
Citation
Liu W, Jiang Y, Deng W and Tan A (2025) Expertise and emotion: how online opinion leaders shape public perceptions of AIāamong university students in China. Front. Commun. 10:1640957. doi: 10.3389/fcomm.2025.1640957
Received
04 June 2025
Accepted
14 July 2025
Published
28 July 2025
Volume
10 - 2025
Edited by
Ataharul Chowdhury, University of Guelph, Canada
Reviewed by
Wei Fang, Beijing Information Science and Technology University, China
Michael Christian, University of Bunda Mulia, Indonesia
Copyright
Ā© 2025 Liu, Jiang, Deng and Tan.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Weijia Deng, dwjdd@shisu.edu.cn