- 1 School of Journalism and Communication, Shanghai International Studies University, Shanghai, China
- 2 Honors College, Shanghai International Studies University, Shanghai, China
- 3 Key Laboratory of Brain-Machine Intelligence for Information Behavior (Ministry of Education and Shanghai), School of Business and Management, Shanghai International Studies University, Shanghai, China
The rapid development of AI technology has triggered intense discussions on social media. As key users, online opinion leaders (OILs) wield “emotional power” that exhibits an “emotion setting” effect, influencing users’ perceptions of AI, with their “expert” identity playing a crucial role in emotional communication. To examine the impact of OILs’ expert credibility and emotional arousal level on users’ AI perception, an experimental study (N = 102) was conducted. Results show that under a negative tone, higher-expert-credibility OILs led participants to perceive AI as more useful (PU) and easier to use (PEOU). Similarly, higher emotional arousal strengthened these perceptions. Notably, for high-credibility OILs, the arousal-credibility interaction significantly affected both PEOU and PU, whereas for low-credibility OILs, it impacted PU but not PEOU. Furthermore, AI anxiety mediates the arousal-perception relationship, moderated by expert credibility. Critically, emotional arousal significantly influenced AI anxiety regardless of credibility level. This study elucidates how OILs shape sociotechnical imaginaries amid rapid AI advancement.
Introduction
Artificial Intelligence (AI) technology has now infiltrated all aspects of people’s lives, and its potential has not only captured the interest of industry but also sparked widespread public imagination regarding its applications. The concept of sociotechnical imaginaries was originally formulated around nation-states’ visions of technological futures, but scholars soon recognized that this initial definition focused primarily on the national policy dimension, overlooking the various other ways that technologies can shape social life. Jasanoff (2015) accordingly broadened the concept, defining sociotechnical imaginaries as ‘collectively held, institutionally stabilized, and publicly performed visions of desirable futures’, animated by shared understandings of socially attainable life forms reinforced through technological advancement. Precisely within this conceptual framework, experts and opinion leaders in the social media era co-construct such shared understandings with users through the dual pathways of knowledge negotiation and emotional resonance.
One of the most essential contributions scientists can make is helping the public understand when the truth (or, at least, experts’ best understanding of it) is counterintuitive (Houck et al., 2025). From the perspective of media content, Lupton notes that news reports can shape these imaginaries through positive framing (Lupton, 2017). Other research has focused on the communication process, highlighting that media articles and social media campaigns are equally critical contributors (Quinlan, 2021).
Experts often act as online opinion leaders, disseminating information about emerging technologies (including AI) to users via social media platforms. Opinion leaders can guide these imaginaries on social media platforms from a professional standpoint (Zhuang and Zhang, 2022). Within this context, emotions play a dual role in science communication: as a catalyst for engagement, they enhance the accessibility of complex concepts through resonance effects; yet as a potential liability, excessive emotional framing may compromise factual accuracy. This dichotomy is exemplified by AI fear-mongering narratives that obscure genuine technological progress while amplifying public anxiety.
To elicit more valid responses from participants, we designed the experiment to simulate real-life social media browsing scenarios, aiming to examine how online opinion leaders’ expert credibility and messages conveying varying emotional arousal levels shape users’ perceptions of AI technology in social media contexts. Specifically, adopting the lens of science communication on social media, we integrate the framework of online opinion leadership to investigate the multifaceted identities of these digital influencers and their associated emotional engagement mechanisms. Additionally, recognizing that users also possess agency, the study includes an analysis of the role of AI anxiety.
This study makes significant contributions to the field of science communication and public perception of emerging technologies. Theoretically, it provides the first empirical evidence in social media contexts demonstrating that opinion leaders’ influence operates through a dual-path mechanism, the interaction between expert credibility and emotional arousal, in shaping AI perceptions, thereby addressing a critical gap in social media-based science communication research. Furthermore, by incorporating “AI anxiety” as a key variable, our work moves beyond the communicator-centered paradigm, empirically validating the mediating role of audiences’ own emotions in technology perception formation. The study also offers practical guidance for science communication: it gives communicators a clear method for balancing emotional content and trustworthiness, informs systems for demonstrating expert credentials, and supports educational programs that address both the facts and the feelings surrounding science. Together, these measures allow experts and the public to jointly shape how people view AI, moving beyond merely correcting misunderstandings toward building a shared understanding of technology.
The impact of opinion leaders’ expert credibility in shaping users’ perception
When scientists communicate with the public, they take on one of several roles, which can complicate their task (Fähnrich and Lüthje, 2017). An “issue advocate” or “public intellectual,” for instance, classifies events or focuses on research implications for a particular political agenda, while other scientists take the position of “science arbiter” or “honest broker” (Pielke, 2007). According to the definition of “opinion leader” in communication research, such scientists and public intellectuals can readily be classified as opinion leaders.
The identity and characteristics of opinion leaders have long attracted scholarly attention, and various research methods have been used to identify them (Rogers and Cartano, 1962). In Lazarsfeld’s original research, opinion leaders were those with a firm grasp of political information and great familiarity with that field. Some scholars define opinion leaders as “actually experts and/or social connectors who are active participants in online and offline communities” (Goldenberg et al., 2006). With respect to expert identity, credibility is an important basis for judgment. Credibility refers to the degree to which an individual is perceived to possess relevant expertise in a particular subject and can be relied upon to provide an objective assessment of it; accordingly, a credible source is identified as a communication medium known for accurate information (Goldsmith et al., 2000; Visentin et al., 2019). In a case study of the COVID-19 vaccine, Chinese scholars likewise point out that professional online opinion leaders can amplify the public’s social and technological imagination (Zhuang and Zhang, 2022). Based on existing research, it is generally believed that the credibility of opinion leaders enhances their persuasive effect.
Extant research has predominantly focused on how opinion leaders positively promote audience adoption, while largely neglecting whether negative content from opinion leaders may trigger audience rejection of related technologies or products. However, it is noteworthy that negative emotions typically exert stronger influence than positive ones (Baumeister et al., 2001), and a widely recognized negativity bias exists (Rozin and Royzman, 2001). To address this gap, the present study explores negative expression effects and proposes the following research hypothesis:
H1: Negative expressions about AI technology from high-expert-credibility online opinion leaders have a stronger enhancing effect on users’ perceptions of the technology than those from low-expert-credibility online opinion leaders.
The role of emotional arousal
Emotion refers to people’s attitude toward objective things and their corresponding behavioral reactions (Plutchik, 1984). According to Emotion As Social Information (EASI) theory, emotional expressions provide information to observers, which may influence their cognition, attitudes, and behavior (Van Kleef et al., 2011).
If science communication relies solely on factual information (the “deficit model”), it often fails to effectively reach the public (Taddicken and Reif, 2020). Emerging “public engagement” models demonstrate that emotional narratives (e.g., science slams, edutainment videos) can lower participation barriers, stimulate positive emotions, and thereby enhance the appeal of scientific topics (Niemann et al., 2020). Emotions serve not merely as communication tools but also as bridges connecting science with the public. For instance, “hope narratives” in environmental issues are more effective at driving behavioral change than mere risk warnings (Lidskog et al., 2020). Negative emotions (e.g., fear, alienation) may widen the gap between the public and science, particularly as marginalized groups actively disengage due to “emotional distance” (Humm et al., 2020).
The public seeks out and/or encounters information about science and technology from various sources, ranging from television, newspapers, and social media to interpersonal relationships (Besley and Hill, 2020; Pew Research Center, 2017). The role that emotion plays in social media communication has been recognized by scholars, and extensive research has been conducted across multiple platforms. Social media, as “emotion media,” can amplify science communication through resonance, yet they may also fuel the spread of misinformation via emotional polarization (Taddicken and Wolff, 2020). A study on Twitter confirms that an emotion flow runs beneath the Twitter network and that the emotion of public opinion does influence users’ individual emotions (Naskar et al., 2020). A study on Weibo, China’s most popular social media platform, likewise demonstrates that online opinion leaders are not objective ‘infomediaries,’ but influence users through emotional contagion (Bai and Xiao, 2011; Fu and Li, 2022).
For more in-depth analysis of emotion, the PAD (pleasure-arousal-dominance) model, first developed by Russell and Mehrabian (1977), has been widely adopted. Given that the human nervous system processes primarily two dimensions of emotion, arousal and valence, during interactions (Gerber et al., 2008), and that dominance has not been examined to the same extent as the other two factors, research has focused chiefly on valence and arousal. On social media platforms, in both Chinese- and English-language contexts, scholars find that valence and arousal have a positive impact on user response, which is conducive to communication (Zhou and Ning, 2020; Feng, 2024). In this study, we mainly examine the role of emotional arousal among the affective dimensions. Based on this, the second research hypothesis is proposed:
H2: When online opinion leaders express negative views on AI technology, higher levels of negative emotional arousal have a stronger impact on users’ perceptions of AI technology.
Meanwhile, studies suggest that emotional language can harm the trustworthiness of scientists as well as the credibility of their arguments (König and Jucks, 2019). Conceptually, arousal degree is distinct from expert credibility: arousal refers specifically to the emotional attributes of the information itself and is independent of the identity of the online opinion leader. This research therefore examines which factor, the expert credibility of online opinion leaders or the arousal degree of messages in a social media context, plays the greater role in shaping users’ AI technology perception, and investigates how the two factors jointly influence and interact with such perceptions, leading to the following hypothesis:
H3: The interaction between emotional arousal and expert credibility has a significant influence on users’ perceptions of AI technology.
Users’ AI anxiety
Although this study is not a typical persuasion study, its model path is similar to one: both start from media information and examine its impact on users. Because the path of communication is rarely smooth all the way, exploring the mediating role of various variables along it has become a focus for scholars. Generally speaking, these mediating variables come from the source of the information, the characteristics of the information text, or the user.
From the perspective of the chain of emotional communication, this article has already addressed the emotional characteristics of the information text, but users’ own emotion-related variables have not yet been included in the investigation. Receptivity to scientific communication aimed at changing attitudes depends on an individual’s prior beliefs and commitment to them (Houck et al., 2025). Incongruence between media content and audience emotions in terms of valence produces greater media effects: the overall positive tone of mediated information about AI led to greater public support among people who harbor more negative emotion (anger) and less positive emotion (hope) (Choi et al., 2024). Reactions toward AI may be infused with discrete emotions, rather than subtler feelings, because narratives about AI have been around for a long time, and such narratives revolve around discrete emotions such as fear and hope (Cave et al., 2019).
In the history of technological development, people have long been afraid of or anxious about new technologies; the term technophobia emerged specifically to describe fear of the effects of technological developments on society or the environment. In the 1980s, the spread of computers triggered research on computer anxiety, referring to a disabling level of anxiety created by actual or even imagined interaction with computers, or an internal dialogue that diminishes people’s abilities and undermines their self-confidence (Rosen et al., 1987). Compared with computers, the changes brought about by AI technology are more revolutionary and raise more ethical challenges concerning humans and machines. Scholars argue that AI anxiety cannot simply be regarded as an extension of previous technological anxiety and requires dedicated research; on this basis, the main factors behind AI anxiety have been examined (Li and Huang, 2020). From technophobia to current AI anxiety, these concepts all emphasize that users’ own anxiety or fear affects their perception of technology. Therefore, this study proposes the following research hypothesis:
H4: Users’ AI anxiety will mediate the path between online opinion leaders’ expressions about AI technology and users’ perception of AI technology.
Users’ AI perception and technology acceptance model
As a dominant framework for operationalizing users’ perceptions of technology, the Technology Acceptance Model (TAM) and its core constructs, Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), provide critical lenses for analyzing AI adoption. The model was proposed by Davis (1989), who applied rational behavior theory to study users’ acceptance of information systems. The original intention behind TAM was to examine key factors determining widespread computer adoption, a research objective directly relevant to our study.
With technological evolution, TAM has been consistently applied to measure user perceptions across emerging technologies. Concurrently, studies indicate that emotional narratives may reconfigure perceived risks/benefits beyond PU/PEOU parameters (Hassanein and Head, 2007). Specifically, narrative frameworks emphasizing ‘emotional connection’ or ‘spiritual resonance’ in technology promotion can reshape users’ risk–benefit assessments, potentially transcending traditional usefulness and ease-of-use boundaries.
Therefore, this study employs the TAM to operationalize perceptions of AI, specifically measuring the source characteristics of AI-related content and its embedded emotional arousal level to examine their interactive effects on perception formation.
Materials and methods
Participants
This study recruited participants for a behavioral experiment in May 2024. Recruitment information was released primarily in universities, and a total of 102 students participated. Of these, 54.9% were female and 45.1% male, with ages ranging from 18 to 38 (Mage = 23.66, SD = 3.455). Undergraduate students made up 35.3% of the sample; master’s and doctoral students made up 50.0% and 12.7%, respectively. The experiment was approved by the Ethics Committee of our university.
Design and procedure
This within-subjects study employed a 2 (expert credibility: high/low) × 2 (emotional arousal: high/low) factorial design with repeated measures across experimentally controlled stimuli varying in expert credibility and message emotional arousal. The experiment comprised 4 distinct conditions, each consisting of 6 stimulus messages.
At the experiment’s onset, participants were seated at 24 designated computers in the group laboratory, where they viewed pre-generated simulated Weibo screenshots presented in random order. After each stimulus screenshot, a series of questions was displayed, which participants answered based on the screenshot they had just viewed. The 30-min experimental procedure was designed and administered using E-Prime 3.0.
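To make the design concrete, the sketch below reconstructs the resulting trial structure in Python (the actual experiment was programmed in E-Prime 3.0; the condition labels and message indices here are illustrative assumptions, not the study’s materials):

```python
import random
from itertools import product

# Hypothetical reconstruction of the trial list: 2 (credibility) x 2 (arousal)
# conditions with 6 messages each, presented in a random order per participant.
CREDIBILITY = ["high", "low"]
AROUSAL = ["high", "low"]
MESSAGES_PER_CONDITION = 6

def build_trial_list(seed=None):
    rng = random.Random(seed)
    trials = [
        {"credibility": c, "arousal": a, "message_id": i}
        for c, a in product(CREDIBILITY, AROUSAL)
        for i in range(MESSAGES_PER_CONDITION)
    ]
    rng.shuffle(trials)  # random presentation order, as in the experiment
    return trials

if __name__ == "__main__":
    for trial in build_trial_list(seed=42)[:5]:
        print(trial)
```

This yields 24 trials per participant (4 conditions × 6 messages), matching the design described above.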
Measures
All measures during the formal experiment were assessed using a 7-point Likert scale, wherein higher scores indicate stronger agreement with the concept being measured.
Outcomes
As introduced earlier, we examined users’ perceptions of AI technology using TAM. As shown in Table 1, each scale comprises 4 items (Cheng et al., 2006). To measure users’ perceptions of artificial intelligence, we contextually adapted the original “Internet Banking (IB)” scale items to “Artificial Intelligence (AI)” applications, preserving the original Likert-scale structure.
Mediating variable: AI anxiety
The measurement utilized a 4-item scale originally developed by Li and Huang (2020) (M = 2.47, SD = 1.39, Cronbach’s alpha = 0.92). Participants rated their agreement with the following items:
1. AI might harm humans in pursuit of a specific goal.
2. I am concerned that artificial intelligence could pose substantial risks to society at large.
3. I am apprehensive that artificial intelligence will attain a level of consciousness equivalent to humans.
4. I would feel uneasy due to my inability to discern whether AI possesses human consciousness or not.
Covariate: attitude towards AI technology
Participants were asked, “What is your attitude towards AI technology?” using a 7-point Likert scale to assess their sentiments, where 1 represents the most negative stance and 7 the most positive. This question was measured before the experiment began.
Stimuli
Negative emotion and level of arousal
Building upon established findings on negativity bias (Rozin and Royzman, 2001), we designed experiments focusing on negative emotional responses to AI technology.
First, a total of 200 messages were generated with ChatGPT 4.0 according to the following prompts:
1. “Write a review with a negative attitude on the theme of artificial intelligence. Requisition: colloquial language, strong emotional expression, and a length of 60–80 characters.”
2. “Write a review with a negative tone on the theme of artificial intelligence. Requisition: colloquial language, moderate emotional expression, and a length of 60–80 characters.”
3. Since the specific usage scenarios of AI mostly revolve around AI Q&A, the following instruction was added: “Write a review with a negative tone on the theme of artificial intelligence, focusing on specific application scenarios other than AI Q&A. Requisition: colloquial language, strong emotional expression, and a length of 60–80 characters.”
4. Since the generated content had highly similar sentence structures, a further instruction was given: “Express the above content in different sentence structures to vary the output.”
Second, the AI-generated messages underwent Chinese translation and manual linguistic polishing for fluency. Thirty trained annotators then rated each message’s emotional arousal using the Self-Assessment Manikin (SAM; Bradley and Lang, 1994), a 9-point pictorial scale (1 = low arousal, 9 = high arousal). Messages were categorized as high-arousal (mean score ≥ 5) or low-arousal (mean score ≤ 4), with the most extreme scorers (top/bottom 15%) selected as experimental stimuli (12 per group). Meanwhile, to ensure the rigor of the experiment and eliminate the influence of valence, the valence of the selected materials was also manually assessed. The average valence of both sets (high-arousal and low-arousal) of stimuli was below 4.5 on the 9-point scale; thus both belong to the low-valence (negative) category.
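As an illustration of this selection step, the following Python sketch averages the annotators’ SAM arousal scores per message and keeps the extremes; the file and column names are hypothetical stand-ins for the study’s rating data:

```python
import pandas as pd

# Hypothetical ratings table: one row per (message_id, annotator) pair with a
# 9-point SAM arousal score; names are illustrative, not the study's files.
ratings = pd.read_csv("sam_arousal_ratings.csv")  # message_id, annotator, arousal

mean_arousal = ratings.groupby("message_id")["arousal"].mean()

# Categorize by mean SAM score, as described in the text.
high_pool = mean_arousal[mean_arousal >= 5]
low_pool = mean_arousal[mean_arousal <= 4]

# Keep the most extreme scorers (12 per group, per the paper's top/bottom 15%).
N_PER_GROUP = 12
high_stimuli = high_pool.nlargest(N_PER_GROUP)
low_stimuli = low_pool.nsmallest(N_PER_GROUP)

print("High-arousal stimulus IDs:", list(high_stimuli.index))
print("Low-arousal stimulus IDs:", list(low_stimuli.index))
```

The same pattern (averaging ratings, then taking the highest and lowest scorers) applies to the expert-credibility selection described next.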
Expert credibility of online opinion leaders
This study initially identified online opinion leaders on the Weibo platform based on the following criteria: having over 300,000 followers (with some exceeding one million) and possessing the platform’s red ‘V’ verification badge. Among them, some were authentic experts meeting the following qualifications: (1) professionals skilled in utilizing AI tools who regularly post recommendations about AI technology products; (2) executives from leading AI companies; and (3) offline scientists with established academic careers, while others were non-experts.
After selecting 22 online opinion leaders based on the aforementioned criteria, we created 22 simulated visual profiles replicating their identity information while concealing authentic personal details (including profile photos and original usernames). These simulated profiles were then validated through human ratings of expert credibility: raters completed an online questionnaire displaying screenshots of the 22 fabricated opinion leader profiles, rating each influencer’s perceived expertise on a 5-point Likert scale. After calculating the average credibility score for each online opinion leader, we selected the 6 individuals with the lowest scores (ranging from 1.61 to 2.39) as low-expert-credibility opinion leaders. Conversely, the 6 individuals with the highest scores (ranging from 3.28 to 3.83) were designated as high-expert-credibility opinion leaders.
For the formal experiment, we generated simulated microblog post screenshots incorporating the fabricated identity information from the aforementioned opinion leader profiles. These materials were systematically deployed as experimental stimuli to test users’ perception mechanisms. Each simulated online opinion leader was assigned one high-arousal stimulus message and one low-arousal stimulus message.
Results
Manipulation check
To assess the efficacy of our arousal manipulation, participants evaluated their arousal level after the presentation of each message during the experiment. An independent-samples t-test revealed that ratings for the high-arousal materials were significantly higher than those for the low-arousal materials (p < 0.001, Mhigh-arousal = 4.928, Mlow-arousal = 3.940), confirming the effectiveness of the manipulation.
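For readers who wish to reproduce this kind of check, a minimal Python sketch with simulated data of the reported shape (the real ratings are available in the OSF repository linked below) might look like this:

```python
import numpy as np
from scipy import stats

# Illustrative manipulation check: made-up arousal ratings shaped like the
# design, pooled across participants for each stimulus group.
rng = np.random.default_rng(0)
high_group = rng.normal(4.93, 1.2, size=300)  # hypothetical ratings
low_group = rng.normal(3.94, 1.2, size=300)

t, p = stats.ttest_ind(high_group, low_group)
print(f"t = {t:.3f}, p = {p:.4g}")
print(f"M_high = {high_group.mean():.3f}, M_low = {low_group.mean():.3f}")
```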
Hypothesis testing
Hypothesis one predicted that reading messages from online opinion leaders with different degrees of credibility would strongly affect participants’ perception of technology. The results indicated that the main effect of online opinion leaders’ credibility degree was significant for both PEOU (F(1, 77) = 8.418, p < 0.01, η2p = 0.099) and PU (F(1, 82) = 25.751, p < 0.001, η2p = 0.239).
For PEOU, participants reported higher perceptions under high-credibility conditions than under low-credibility conditions (Mhigh-credibility = 3.91, SD = 1.16 versus Mlow-credibility = 3.83, SD = 1.15; t = 1.63, p > 0.05). Although this pairwise difference was not significant, it suggested a trend whereby users who received messages from high-credibility online opinion leaders reported more positive PEOU.
Regarding PU, the difference between the low-credibility and high-credibility conditions was more pronounced. Participants in the high-credibility condition reported higher PU than those in the low-credibility condition (Mhigh-credibility = 4.77, SD = 1.11 versus Mlow-credibility = 4.58, SD = 1.18; t = 4.02, p < 0.001). This difference indicated that users who received messages from high-credibility online opinion leaders found AI technology more useful than when they read messages from low-credibility online opinion leaders. Overall, these findings suggest that the credibility of online opinion leaders plays a role in shaping participants’ perceptions, particularly PU. Therefore, H1 is supported.
Hypothesis two predicted that participants exposed to messages with high-arousal emotions would report higher levels of AI perceptions than those exposed to low-arousal messages. To test this hypothesis, we employed the same method as for H1. As expected, the repeated-measures ANOVA results revealed that the main effects of arousal level were significant for both PEOU (F(1, 77) = 21.433, p < 0.001, η2p = 0.218) and PU (F(1, 82) = 44.249, p < 0.001, η2p = 0.350). After reading high-arousal messages, participants reported higher PEOU scores than after reading low-arousal messages (Mhigh-arousal = 3.95, SD = 1.18 versus Mlow-arousal = 3.79, SD = 1.12; t = 2.03, p < 0.05). Similarly, for PU, the high-arousal group exhibited significantly higher ratings than the low-arousal group (Mhigh-arousal = 4.83, SD = 1.09 versus Mlow-arousal = 4.52, SD = 1.19; t = 4.10, p < 0.001). Thus, H2 is also supported.
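The paper does not name its analysis software; as one hedged possibility, the 2 × 2 repeated-measures ANOVA could be run in Python with the pingouin package on a long-format table (file and column names here are assumptions):

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant x condition cell mean.
# Column names are hypothetical stand-ins for the study's variables.
df = pd.read_csv("perception_long.csv")  # subject, credibility, arousal, PEOU, PU

for outcome in ["PEOU", "PU"]:
    aov = pg.rm_anova(
        data=df,
        dv=outcome,
        within=["credibility", "arousal"],  # 2x2 repeated measures
        subject="subject",
        effsize="np2",  # partial eta squared, as reported in the paper
    )
    print(outcome)
    print(aov.round(4))  # main effects and the credibility x arousal interaction
```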
Regarding hypothesis three, the interaction effects between arousal level and credibility degree were also significant for PEOU (F = 8.365, p < 0.01, η2p = 0.098) and PU (F = 5.114, p < 0.05, η2p = 0.059). As illustrated in Figure 1 for PEOU, under the high-credibility condition, participants reported higher PEOU scores after reading high-arousal messages than after low-arousal messages (Mhigh-arousal = 4.004, SD = 1.069 vs. Mlow-arousal = 3.811, SD = 1.012; t = −4.961, p < 0.001). However, under the low-credibility condition, the difference in PEOU scores between high-arousal and low-arousal messages was not significant (Mhigh-arousal = 3.880, SD = 0.982 vs. Mlow-arousal = 3.809, SD = 1.012; t = −1.829, p > 0.05).
For PU (see Figure 2), the post-hoc analysis revealed significant differences between the low- and high-arousal conditions in both credibility groups. In the high-expert-credibility group, participants perceived AI as more useful after high-arousal messages than after low-arousal messages (Mhigh-arousal = 4.951, SD = 0.809 versus Mlow-arousal = 4.636, SD = 0.895; t = −6.493, p < 0.001). In the low-expert-credibility group, the difference was also significant (Mhigh-arousal = 4.725, SD = 0.901 versus Mlow-arousal = 4.507, SD = 1.045; t = −5.167, p < 0.001). Therefore, H3 is partially supported.
The moderated mediation analysis
We conducted moderated mediation analyses based on 5,000 bootstrap samples (Hayes, 2013, PROCESS Model 8) to test whether AI anxiety mediated the observed differences in AI technology perception. The models included AI anxiety as the mediator, credibility as the moderator (dummy coded: low-credibility = 0, high-credibility = 1), and prior attitude as a covariate (see Figure 3).
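For reference, PROCESS Model 8 estimates two regressions; writing X for message arousal, W for credibility, M for AI anxiety, and Y for the perception outcome (PEOU or PU), and omitting the attitude covariate for brevity:

```latex
\begin{aligned}
M &= a_0 + a_1 X + a_2 W + a_3 (X \times W) + e_M \\
Y &= b_0 + c'_1 X + c'_2 W + c'_3 (X \times W) + b_1 M + e_Y
\end{aligned}
```

The conditional indirect effect of X on Y through M at a given level of W equals (a_1 + a_3 W) b_1, and the index of moderated mediation, the quantity bootstrapped in the analyses below, equals a_3 b_1.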
The moderated mediation models show significant results. With PEOU as the outcome variable (see Table 2), the whole model was significant (F(5, 2409) = 20.4377, p < 0.001, R2 = 0.041). The interaction of message arousal and online opinion leaders’ credibility had a significantly positive effect on AI anxiety (B = 0.334, t = 2.9898, p < 0.01). Simple slope analysis showed (see Figure 4) that for the low-credibility group, the arousal level of messages had a significantly positive effect on participants’ AI anxiety (simple slope = 0.7116, t = 8.9010, p < 0.001); for the high-credibility group, this positive effect was larger (simple slope = 1.0455, t = 13.1372, p < 0.001). Furthermore, the indirect effect of AI anxiety differed significantly across degrees of credibility (index of moderated mediation = −0.0321; 95% confidence interval [CI]: [−0.0586, −0.0094]). However, the mediation of AI anxiety operated as a suppressing effect (see Table 3).
For PU, the whole model was also significant (F(5, 2417) = 38.6365, p < 0.001, R2 = 0.074), and the moderating effect of credibility was significant (see Table 4). The interaction effect of arousal and credibility on AI anxiety was likewise significant (B = 0.317, t = 2.8134, p < 0.01). Moreover, the indirect (suppressing) effect of AI anxiety again differed significantly across degrees of credibility (index of moderated mediation = −0.0255; 95% CI: [−0.0493, −0.0063]) (see Table 5). These results indicate that H4 is not supported in its original form: AI anxiety acts as a suppressor rather than a simple mediator.
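The authors used Hayes’s PROCESS macro; as a rough open-source analogue, the index of moderated mediation could be bootstrapped in Python as sketched below (file and column names are hypothetical; a fully faithful analysis would also resample by participant rather than by row):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: arousal (0/1), credibility (0/1),
# anxiety (mediator), attitude (covariate), PEOU (outcome).
df = pd.read_csv("trials_long.csv")

def index_of_moderated_mediation(data):
    # Mediator model (a-paths): anxiety ~ arousal * credibility + attitude
    m = smf.ols("anxiety ~ arousal * credibility + attitude", data=data).fit()
    # Outcome model (b- and c'-paths): add the mediator to the outcome equation
    y = smf.ols("PEOU ~ arousal * credibility + anxiety + attitude", data=data).fit()
    a3 = m.params["arousal:credibility"]  # moderation of the arousal -> anxiety path
    b1 = y.params["anxiety"]              # anxiety -> perception path
    return a3 * b1

# Percentile bootstrap with 5,000 resamples, as in the paper.
rng = np.random.default_rng(2024)
boot = []
for _ in range(5000):
    idx = rng.integers(0, len(df), len(df))
    boot.append(index_of_moderated_mediation(df.iloc[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Index of moderated mediation: {index_of_moderated_mediation(df):.4f}")
print(f"95% bootstrap CI: [{lo:.4f}, {hi:.4f}]")
```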
Conclusion and discussion
We acknowledge that the impact of emotional communication on users goes far beyond the level of perception. In terms of communication effectiveness, emotional communication displays distinct characteristics across media contexts. Offline attitudes towards the same event are more rational and evenly distributed, whereas online attitudes are prone to polarization (Yong et al., 2016). Extensive research on Weibo has found that emotionally charged content is more in line with users’ cognitive patterns and thus more likely to be shared (Zhao and Tu, 2012). This study investigates how online opinion leaders on Weibo influence public perceptions of AI technologies through two key dimensions: (1) their expert status in AI-related fields, and (2) the emotional arousal levels of their messages. Drawing on the Technology Acceptance Model (TAM), we measured perception outcomes through two core components: perceived usefulness (PU) and perceived ease of use (PEOU). Furthermore, the research incorporates users’ emotional disposition by examining the mediating role of AI anxiety throughout the cognitive evaluation process.
The research demonstrates that varying levels of emotional arousal significantly influence users’ perceptions of AI technology, and that the expert status of online opinion leaders exerts a measurable impact on shaping these perceptions. Notably, users’ AI anxiety was found to exert a suppressive effect throughout this cognitive evaluation process.
The impact of online opinion leaders’ expert credibility on users’ perception
The results indicate that the expert credibility of online opinion leaders has an impact on users, whether it pertains to their PEOU or PU. In the diffusion of controversial technological innovations, experts influence the rate and extent of acceptance by serving as opinion leaders (Leonard-Barton, 1985). Contrary to expectations, negative messages disseminated by high-expert credibility opinion leaders were found to paradoxically enhance users’ perceptions of AI technology as being more user-friendly and beneficial, compared to messages from low-expert credibility opinion leaders.
Users have developed a state of distrust toward experts in specialized fields and individuals with high credibility, especially as attacks on experts via social media have been shown to damage their credibility (Gierth and Bromme, 2020). When people are mistrustful, they spontaneously activate associations that are incongruent with the given message (Schul et al., 2004). This inconsistency arising from distrust is confirmed once again in our experiments. Meanwhile, online opinion leaders accelerate the information stream and the adoption process itself (Eck et al., 2011), which increases users’ exposure to such information and thereby heightens their perception and the likelihood of adoption, irrespective of their attitudes towards the information.
Effect of emotional arousal level
Experimental evidence confirms that negative messages with heightened emotional arousal significantly increase the likelihood of users perceiving AI technology as more advanced and user-friendly, thereby elevating scores on the Technology Acceptance Model (TAM) scale. This phenomenon may be attributed to the intensification of negative information, which prompts users to adopt a more cautious stance and thereby cultivates a more conservative perspective. When individuals make decisions, various factors may lead to a cautious shift or a risky shift (Stoner, 1961). The concept of group polarization in most studies belongs to the “risky shift” and assumes that negative, high-arousal emotions may lead to irrational group polarization (Liao et al., 2023), suggesting that high-arousal negative emotions typically elicit a “risky shift” in behavior. Our experimental findings, however, indicate the possibility of a “cautious shift.” While high-arousal emotions tend to attract more attention from users, users are prone to consciously reject such information because of their emotions and subsequently gather arguments to refute it (Kunda, 1990). Higher affective intensity provokes motivated reasoning, which in turn leads to opinion polarization (Asker and Dinas, 2019), reinforcing users’ commitment to their own stances.
Furthermore, our findings demonstrate that, within a negative tone, high-arousal emotion elevates users’ vigilance and prompts them to adopt a more cautious attitude when assessing messages. Consequently, as the arousal intensity of negative information escalates, its persuasive effectiveness diminishes progressively. These findings supplement previous research to a certain degree.
Interaction effect on users’ perception
The results of the interaction effect show that, at the level of PEOU, the interaction between the expert credibility of online opinion leaders and emotional arousal has a significant impact on users’ perception of AI technology. When users receive information from high-credibility experts, high-arousal negative content actually makes users perceive AI technology as easier to use; this difference is not pronounced when users receive information from non-experts. This may be attributed to the influence of anti-intellectualism (White, 1962). When we harness modern technology to serve us, we also implicitly consent to, comply with, or are compelled to accept various rules set by modern technology at both the cognitive and value levels (Ge, 2023). This can elicit feelings of resistance: especially when users receive information from so-called ‘experts’, this anti-intellectual sentiment may intensify, resulting in a paradoxical effect whereby the more highly arousing negative information users receive from these experts, the stronger their positive perceptions become. Since PEOU influences both PU and attitude toward using (Davis, 1989), users’ attitudes toward AI technology may be jointly influenced by the expert credibility of online opinion leaders and the level of emotional arousal, representing the combined effect of anti-intellectual sentiment and cautious bias.
For PU, the interaction effect further demonstrates that, unlike PEOU, users’ perceived usefulness of AI technology is affected by the degree of emotional arousal in opinion leaders’ messages even when those leaders lack professional credentials. Specifically, as the level of negative emotional arousal increases, users’ PU also rises, meaning they are more inclined to believe that AI technology is useful. In other words, for PU, users are influenced less by the credibility of the source and more by emotions. Research shows that PU affects behavioral intention (Yong et al., 2016). It can therefore be stated that the arousal level of negative emotions primarily influences the perception aspects that are more behavior-related, while its impact on attitude intention is not confirmed.
The moderating effect of online opinion leaders’ expert credibility degree on users’ AI anxiety
The study finds that expert online opinion leaders can, to a certain extent, reduce users’ levels of AI anxiety, especially when the emotional arousal of the negative information they convey is low. Specifically, the first stage of the indirect path, from the information’s emotional arousal through users’ AI anxiety to their perception of AI technology (see Figure 3), is moderated by the expert credibility of online opinion leaders. Regardless of the credibility of the opinion leader from whom the negative information originates, the information reduces users’ positive perception of AI technology, including PEOU and PU, by increasing their AI anxiety. However, when the information comes from non-experts, this effect is more pronounced at low levels of emotional arousal than when the source is experts; as emotional arousal increases, the two converge, with the effect intensifying more steeply for experts than for non-experts. The ranking of the joint impact of opinion leaders’ expert credibility and message arousal on users’ AI anxiety is shown in Figure 5.
Overall, high-credibility online opinion leaders tend to have a lighter impact on users’ level of AI anxiety than low-credibility online opinion leaders. Moreover, when the public places trust in risk managers and experts, communication flows more smoothly; conversely, in the absence of trust, communication encounters greater challenges (Fessenden-Raden et al., 1987). Yudkowsky (2008) suggested that AI may pose global risks, including the risk of a super AI destroying humans and the risk of AI’s rapid evolution. When confronted with such risks, the public is unlikely to place trust in online ‘experts’.
The suppressing effect of AI anxiety
The results of the moderated mediation analysis reveal that users’ AI anxiety levels exert a suppressing effect on the relationship between the emotional arousal intensity conveyed by online opinion leaders and users’ technological perception.
After receiving information from online opinion leaders with varying degrees of expert credibility, users experience an increase in their AI anxiety, which subsequently reduces their positive perception of AI technology, making them more inclined to believe that AI technology is less useful and more difficult to use. Negative affective states, particularly those of high arousal, may prime anxiety-related schemas through automatic affective priming (Fazio et al., 1986). This process limits attentional resources to threat-relevant cues (Easterbrook, 1959), resulting in a narrowed cognitive scope that amplifies risk perceptions.
Technophobia, similar to AI anxiety, is a reaction to the interaction between humans’ internal crises and the negative effects of technology; it is not merely a negative emotional response but also encompasses the subject’s active coping process (Wang, 2024), manifesting as technology anxiety and technology stress (Lei and Baohua, 2014). As users’ AI anxiety intensifies, their assessment of AI technology’s usefulness and ease of use diminishes, indicating that they may be coping with the potential fear induced by AI technology by reducing their acceptance of it. Research has also shown that the emotions of communicators in cyberspace can influence the emotions of information recipients, further shaping their subsequent behaviors.
The anxiety audiences experience when communicating with others is usually based on negative expectations (Gudykunst and Nishida, 2001), and one behavioral consequence of anxiety is avoidance (Stephan and Stephan, 1985). As users’ AI anxiety intensifies, they tend to avoid engaging with AI. AI anxiety is a comprehensive concept encompassing various negative emotions such as fear, anxiety, and worry. Classic studies found that increasing levels of fear did not increase acceptance of recommendations about proper tooth-brushing (Janis and Feshbach, 1953) or quitting smoking (Leventhal and Niles, 1964). Indeed, fear related to AI does lead users to diminish their perception of AI technology’s usefulness and ease of use, and on this basis users may decrease their behavioral intentions toward AI technology. Research has shown that some minimal amount of fear is necessary for behavior change, but that further increases in fear do not affect change (Leventhal and Niles, 1965). Most of these findings come from the health field; our experiment broadens them to sociotechnical imaginaries. Because AI anxiety is a composite negative-emotion concept, which component emotion is most influential remains to be explored. Nevertheless, by taking AI anxiety as a mediating variable, our study provides a new integrated perspective for revealing how the emotional communication of expert online opinion leaders influences users’ perception.
Based on these findings, science communication practitioners should manage the emotional tone in AI discussions: First, fully utilize the “expert credibility paradox effect”—negative information delivered by experts can enhance users’ perception of technology ease of use; Second, implement graded emotional arousal modulation by matching arousal intensity to different scenarios to avoid irrational cautious shifts triggered by high-arousal emotions; Finally, establish a “trust savings mechanism”—accumulate social capital through routine transparent communication, and immediately provide concrete action plans when monitoring shows user anxiety exceeds critical thresholds, converting anxiety into engagement momentum.
Limitations and directions for future research
The study employed an experimental design with participants recruited exclusively from university settings using simulated Weibo platform materials. This approach resulted in sample size limitations and experimental findings whose generalizability should be interpreted cautiously. Given the constrained sampling frame (Chinese university students) and platform-specific stimuli (Weibo simulations), extrapolations to broader populations or different social media platforms require careful consideration. Consequently, future research should extend these findings through rigorous cross-cultural validation across diverse user demographics.
This research focuses on the theme of AI technology, a field that is experiencing rapid development and significantly impacting today’s society. However, since emotional communication can be applied to many other fields, and emotion includes more than just anxiety (which was the focus of this study), future research is encouraged to investigate whether these findings can be extended to other areas or to explore additional diverse outcomes. Furthermore, future exploration may focus more intently on the mechanisms underlying the various dimensions of emotional communication within social networks. Emotional communication is closely related to several pivotal theories; therefore, based on different theoretical perspectives, future studies can be expanded into different dimensions of emotional communication.
In addition, as this study pointed out, negative expressions about AI technology from online opinion leaders with expert credibility actually increased users’ PEOU and PU of AI technology. This contradicts traditional research, which suggests that online opinion leaders with professional authority are more likely to influence users to accept their viewpoints. The underlying reasons for this phenomenon may include the distinctive nature of AI-related issues, a possible decline in the group influence of expert online opinion leaders in contemporary society, or even a resistance effect triggered by decreasing trust. Therefore, more empirical studies on different topics are needed to further examine the expert credibility of online opinion leaders and its degree, which may vary across social contexts. Furthermore, future research could explore moderating variables of online opinion leaders beyond credibility, such as likability and familiarity.
Finally, this study examines the mediating role of AI anxiety. This concept shares some similarities with technophobia, as both focus on users’ negative emotions; from the perspective of emotional communication mechanisms, however, both concepts are relatively broad and contain a wide range of different emotions. Therefore, building on the present study’s examination of the different properties of online opinion leaders, the emotional mechanisms involved, and the role of users’ own AI anxiety, future research could delve into specific emotions such as fear and boredom, which would help further clarify the role of AI anxiety or technophobia in communication.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://osf.io/qc3z5/?view_only=40f9f326008c45cd903ad49de34bcf2c.
Author contributions
WL: Formal analysis, Writing – original draft, Data curation, Conceptualization, Validation, Methodology, Investigation. YJ: Methodology, Conceptualization, Writing – review & editing, Validation. WD: Funding acquisition, Project administration, Resources, Writing – review & editing, Supervision. AT: Software, Data curation, Writing – review & editing.
Funding
The author(s) declare that financial support was received for the research and/or publication of this article. The research was supported by the open project “The Emotional Communication Mechanism of Online Opinion Leaders and the Impact on Audience Attitude and Behavior” (Grant No. 41004400/002) of the Open Research Fund of the Shanghai Key Laboratory of Brain-Machine Intelligence for Information Behavior.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The authors declare that no Gen AI was used in the creation of this manuscript.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Asker, D., and Dinas, E. (2019). Thinking fast and furious: emotional intensity and opinion polarization in online media. Public Opin. Q. 83, 487–509. doi: 10.1093/poq/nfz042
Bai, S., and Xiao, B. (2011). Affection mobilization of the Sina microbloggers. J. Lanzhou Univ. 39, 60–68.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., and Vohs, K. D. (2001). Bad is stronger than good. Rev. Gen. Psychol. 5, 323–370. doi: 10.1037/1089-2680.5.4.323
Besley, J. C., and Hill, D. (2020). Science and technology: public attitudes, knowledge, and interest. Science and Engineering Indicators 2020. National Science Foundation. Available online at: https://files.eric.ed.gov/fulltext/ED612113.pdf
Bradley, M. M., and Lang, P. J. (1994). Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25, 49–59. doi: 10.1016/0005-7916(94)90063-9
Cave, S., Coughlan, K., and Dihal, K. (2019). “Scary robots”: examining public responses to AI. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society.
Cheng, T. C. E., Lam, D. Y. C., and Yeung, A. C. L. (2006). Adoption of internet banking: an empirical study in Hong Kong. Decis. Support. Syst. 42, 1558–1572. doi: 10.1016/j.dss.2006.01.002
Choi, S., Lee, C.-j., Park, A., and Lee, J. A. (2024). How the public makes sense of artificial intelligence: the interplay between communication and discrete emotions. Sci. Commun. 47, 553–584. doi: 10.1177/10755470241297664
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. Manag. Inf. Syst. Q. 13, 319–340. doi: 10.2307/249008
Easterbrook, J. A. (1959). The effect of emotion on cue utilization and the organization of behavior. Psychol. Rev. 66, 183–201. doi: 10.1037/h0047707
Eck, V., Peter, S., Jager, W., and Leeflang, P. S. H. (2011). Opinion leaders' role in innovation diffusion: a simulation study. J. Prod. Innov. Manag. 28, 187–203. doi: 10.1111/j.1540-5885.2011.00791.x
Fähnrich, B., and Lüthje, C. (2017). Roles of social scientists in crisis media reporting: the case of the German populist radical right movement PEGIDA. Sci. Commun. 39, 415–442. doi: 10.1177/1075547017715472
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., and Kardes, F. R. (1986). On the automatic activation of attitudes. J. Pers. Soc. Psychol. 50:229. doi: 10.1037/0022-3514.50.2.229
Feng, G. C. (2024). Effects of narratives and information valence in digital headlines on user responses. Asian J. Commun. 34, 156–177. doi: 10.1080/01292986.2024.2317308
Fessenden-Raden, J., Fitchen, J. M., and Heath, J. S. (1987). Providing risk information in communities: factors influencing what is heard and accepted. Sci. Technol. Hum. Values 12, 94–101. Available online at: https://www.jstor.org/stable/689388
Fu, X., and Li, Q. (2022). “Ideological and political education of minority students based on network information security model,” in 2022 International Conference on Educational Innovation and Multimedia Technology (EIMT 2022). Atlantis Press.
Ge, Y. (2023). How did technology experts lose public trust?—the technological origins of Western anti-intellectualism and contemporary reflections. Studi. Philos. Sci. Technol. 40, 116–121.
Gerber, A. J., Posner, J., Gorman, D., Colibazzi, T., Yu, S., Wang, Z., et al. (2008). An affective circumplex model of neural systems subserving valence, arousal, and cognitive overlay during the appraisal of emotional faces. Neuropsychologia 46, 2129–2139. doi: 10.1016/j.neuropsychologia.2008.02.032
Gierth, L., and Bromme, R. (2020). Attacking science on social media: how user comments affect perceived trustworthiness and credibility. Public Underst. Sci. 29, 230–247. doi: 10.1177/0963662519889275
Goldenberg, J., Lehmann, D. R., Shidlovski, D., and Barak, M. M. (2006). The role of expert versus social opinion leaders in new product adoption. Mark. Sci. Inst. Rep. 6, 67–84.
Goldsmith, R. E., Lafferty, B. A., and Newell, S. J. (2000). The impact of corporate credibility and celebrity credibility on consumer reaction to advertisements and brands. J. Advert. 29, 43–54. doi: 10.1080/00913367.2000.10673616
Gudykunst, W. B., and Nishida, T. (2001). Anxiety, uncertainty, and perceived effectiveness of communication across relationships and cultures. Int. J. Intercult. Relat. 25, 55–71. doi: 10.1016/S0147-1767(00)00042-0
Hassanein, K., and Head, M. (2007). Manipulating perceived social presence through the web interface and its impact on attitude towards online shopping. Int. J. Hum.-Comput. Stud. 65, 689–708. doi: 10.1016/j.ijhcs.2006.11.018
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: a regression-based approach. New York: Guilford Press.
Houck, A. M., King, A. S., and Taylor, J. B. (2025). The effect of experts on attitude change in public-facing political science: scientific communication on term limits in the United States. Public Underst. Sci. 34, 19–37. doi: 10.1177/09636625241246084
Humm, C., Schrögel, P., and Leßmöllmann, A. (2020). Feeling left out: underserved users in science communication. Media Commun. 8, 164–176. doi: 10.17645/mac.v8i1.2480
Janis, I. L., and Feshbach, S. (1953). Effects of fear-arousing communications. J. Abnorm. Soc. Psychol. 48, 78–92. doi: 10.1037/h0060732
Jasanoff, S. (2015). “Future imperfect: science, technology, and the imaginations of modernity” in Dreamscapes of modernity: sociotechnical imaginaries and the fabrication of power, 1–33.
König, L., and Jucks, R. (2019). Influence of enthusiastic language on the credibility of health information and the trustworthiness of science communicators: insights from a between-subject web-based experiment. Interact. J. Med. Res. 8:e13619. doi: 10.2196/13619
Kunda, Z. (1990). The case for motivated reasoning. Psychol. Bull. 108, 480–498. doi: 10.1037/0033-2909.108.3.480
Lei, Z., and Baohua, X. (2014). The structure and formation model of technophobia. J. Dialectics Nat. 36, 70–127.
Leonard-Barton, D. (1985). Experts as negative opinion leaders in the diffusion of a technological innovation. J. Consum. Res. 11, 914–926. doi: 10.1086/209026
Leventhal, H., and Niles, P. (1964). A field experiment on fear arousal with data on the validity of questionnaire measures 1. J. Pers. 32, 459–479. doi: 10.1111/j.1467-6494.1964.tb01352.x
Leventhal, H., and Niles, P. (1965). Persistence of influence for varying durations of exposure to threat stimuli. Psychol. Rep. 16, 223–233. doi: 10.2466/pr0.1965.16.1.223
Li, J., and Huang, J.-S. (2020). Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technol. Soc. 63:101410. doi: 10.1016/j.techsoc.2020.101410
Liao, S., Cheng, J., and Yu, J. (2023). User interaction and group polarization in online opinion expression: emotions as a mediating variable. Chin. J. J. Commun. 45, 91–117.
Lidskog, R., Berg, M., Gustafsson, K. M., and Löfmarck, E. (2020). Cold science meets hot weather: environmental threats, emotional messages and scientific storytelling. Media Commun. 8, 118–128. doi: 10.17645/mac.v8i1.2432
Lupton, D. (2017). ‘Download to delicious’: promissory themes and sociotechnical imaginaries in coverage of 3D printed food in online news sources. Futures 93, 44–53. doi: 10.1016/j.futures.2017.08.001
Naskar, D., Singh, S. R., Kumar, D., Nandi, S., and Rivaherrera, E. O. d. l. (2020). Emotion dynamics of public opinions on twitter. ACM Trans. Inf. Syst. 38, 1–24. doi: 10.1145/3379340
Niemann, P., Bittner, L., Schrögel, P., and Hauser, C. (2020). Science slams as edutainment: a reception study. Media Commun. 8, 177–190. doi: 10.17645/mac.v8i1.2459
Pew Research Center. (2017). Science news and information today. Available online at: https://www.pewresearch.org/journalism/2017/09/20/science-news-and-information-today/
Pielke, R. A. Jr. (2007). The honest broker: making sense of science in policy and politics. Cambridge: Cambridge University Press.
Quinlan, A. (2021). The rape kit’s promise: techno-optimism in the fight against the backlog. Sci. Cult. 30, 440–464. doi: 10.1080/09505431.2020.1846696
Rogers, E. M., and Cartano, D. G. (1962). Methods of measuring opinion leadership. Public Opin. Q. 26, 435–441. doi: 10.1086/267118
Rosen, L. D., Sears, D. C., and Weil, M. M. (1987). Computerphobia. Behav. Res. Methods Instrum. Comput. 19, 167–179. doi: 10.1016/0747-5632(94)00021-9
Rozin, P., and Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personal. Soc. Psychol. Rev. 5, 296–320. doi: 10.1207/S15327957PSPR0504_2
Russell, J. A., and Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. J. Res. Pers. 11, 273–294. doi: 10.1016/0092-6566(77)90037-X
Schul, Y., Mayo, R., and Burnstein, E. (2004). Encoding under trust and distrust: the spontaneous activation of incongruent cognitions. J. Pers. Soc. Psychol. 86, 668–679. doi: 10.1037/0022-3514.86.5.668
Stephan, W. G., and Stephan, C. W. (1985). Intergroup anxiety. J. Soc. Issues 41, 157–175. doi: 10.1111/j.1540-4560.1985.tb01134.x
Stoner, J. A. F. (1961). A comparison of individual and group decisions involving risk. Dissertation: Massachusetts Institute of Technology.
Taddicken, M., and Reif, A. (2020). Between evidence and emotions: emotional appeals in science communication. Media Commun. 8, 101–106. doi: 10.17645/mac.v8i1.2934
Taddicken, M., and Wolff, L. (2020). ‘Fake news’ in science communication: emotions and strategies of coping with dissonance online. Media Commun. 8, 206–217. doi: 10.17645/mac.v8i1.2495
Van Kleef, G. A., Van Doorn, E. A., Heerdink, M. W., and Koning, L. F. (2011). Emotion is for influence. Eur. Rev. Soc. Psychol. 22, 114–163. doi: 10.1080/10463283.2011.627192
Visentin, M., Pizzi, G., and Pichierri, M. (2019). Fake news, real problems for brands: the impact of content truthfulness and source credibility on consumers’ behavioral intentions toward the advertised brands. J. Interact. Mark. 45, 99–112. doi: 10.1016/j.intmar.2018.09.001
Wang, B. (2024). Technophobia: traceability, evaluation and values. Chengdu: Sichuan People's Publishing House.
White, M. (1962). Reflections on anti-intellectualism. Daedalus, 457–468. Available online at: https://www.jstor.org/stable/20026723
Yong, L., Mengsi, C., and Kai, Z. (2016). Influencing factors and differences of the online and offline emotional spread of social network users: "female driver beaten by a male driver in Chengdu" case as an example. J. Intelligence 6, 80–85.
Yudkowsky, E. (2008). “Artificial intelligence as a positive and negative factor in global risk” in Global catastrophic risks. 1:184.
Zhao, Y., and Tu, L. (2012). The functions of music in media convergence product: an exploratory experiment on electronic magazine. J. Int. Commun. 7, 65–71.
Zhou, Q., and Ning, Y. (2020). Arousal, valence, and dominance: reconstructing the path of political communication on twitter under the influence of emotions. Modern Commun. 42, 53–59.
Keywords: online opinion leader, expert credibility, AI anxiety, emotional communication, perception
Citation: Liu W, Jiang Y, Deng W and Tan A (2025) Expertise and emotion: how online opinion leaders shape public perceptions of AI—among university students in China. Front. Commun. 10:1640957. doi: 10.3389/fcomm.2025.1640957
Edited by: Ataharul Chowdhury, University of Guelph, Canada
Reviewed by: Wei Fang, Beijing Information Science and Technology University, China; Michael Christian, University of Bunda Mulia, Indonesia
Copyright © 2025 Liu, Jiang, Deng and Tan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Weijia Deng, dwjdd@shisu.edu.cn