- Department of Communication Sciences and Public Relations, Faculty of Philosophy and Social-Political Sciences, “Alexandru Ioan Cuza” University of Iași, Iași, Romania
The rapid integration of artificial intelligence (AI) technologies into academic and professional practice has profoundly shaped the communication and public relations domain. However, there is limited empirical understanding of how individuals actually perceive and use these new technologies. Therefore, this article investigates Communication and Public Relations (PR) students’ perceptions of artificial intelligence (AI), specifically aiming to understand how literacy, psychological factors, and trust influence their AI word-of-mouth (WOM). We proposed a theoretical model, which we tested using a quantitative approach with SmartPLS analysis on data gathered from 402 online questionnaires administered to students across three major Romanian universities. The key findings indicate that Internet use fosters AI literacy, which subsequently enhances both AI self-efficacy and the perceived ease of AI use. Crucially, higher AI literacy leads to greater trust, promoting informed choices. While AI self-efficacy encourages appropriate reliance on the technology and positively impacts behavioral intention toward AI, higher trust surprisingly leads to lower privacy concerns. Conversely, heightened privacy concerns increase algorithm aversion, which in turn negatively impacts both behavioral intention and WOM communication about AI. We also confirm that a positive behavioral intention is a strong predictor of increased WOM communication. These findings have significant implications for academia, policymakers, and PR practitioners by highlighting the necessity of boosting AI literacy to mitigate aversion and foster responsible AI adoption among future communication professionals.
1 Introduction
We are currently living in the “AI era” (Davenport and Ronanki, 2018), a period marked by an unprecedented democratization of tools utilized in both professional and daily activities. This transformative context directly influences strategic disciplines such as Public Relations (PR). According to the definition adopted by the Public Relations Society of America (2025), Public Relations represents a strategic communication process that builds mutually beneficial relationships between organizations and their publics. Within this landscape, Artificial Intelligence (AI) is specifically defined as “the ability of a system to identify, interpret, make inferences, and learn from data to achieve predetermined organizational and societal goals” (Mikalef and Gupta, 2021). Consequently, the integration of AI capabilities into PR’s strategic communication process is becoming a central area of study and practical application.
AI is widely used in Communication and Public Relations (PR), healthcare, customer support (virtual assistants and chatbots), and various other industries, from transportation to blockchain and the Internet of Things, and it is often regarded as the most transformative technology of our time. Users are concerned about possible misinformation in AI results (76%) and about the overall quality of AI-generated content (Maheshwari, 2024), as well as about job loss. Despite these worries, 65% of people trust businesses that use AI technology (Maheshwari, 2024), highlighting the distinctive nature of the current moment, which is characterized by fascination, experimentation, anxiety (Teng et al., 2022), and growing familiarity with AI. This context strongly affirms the need for AI regulation. For instance, the European Union’s Artificial Intelligence Act (European Commission, 2021) addresses issues related to transparency, accountability, and ethics. In June 2023, the draft was amended to ban certain problematic practices, such as biometric surveillance, prompting mixed reactions from different stakeholders.
The number of AI users is growing exponentially (Statista, 2024). In Romania, a study of 40,000 respondents aged 18 to 55, living in urban areas and with Internet access, showed that the term artificial intelligence is widely recognized and is mainly associated with robots, ChatGPT, computers, or technology. Of these respondents, 59% reported having used a virtual assistant to request information. ChatGPT, a well-known tool that has given many users a first glimpse into this field, is used mainly for personal purposes, especially for information seeking (84%), education (34%), and professional tasks (32%) (Cult Research, 2023).
In an increasingly AI-mediated society, AI literacy is becoming a core skill for contemporary citizens, enabling them to understand, critically assess, and responsibly engage with intelligent technologies that shape everyday life (Kong et al., 2025; Bilbao Eraña and Arroyo-Sagasta, 2025). AI literacy is not just a technical set of competences; it refers to individuals’ capacity to engage with AI across personal, academic, and professional contexts, including their ability to collaborate with AI systems and critically assess AI-generated outputs (Long and Magerko, 2020). Encompassing both conceptual understanding and real-world application, AI literacy enables users to employ AI for tasks such as problem solving and informed decision-making. Integrating AI literacy into educational practice is also essential for preparing a workforce able to leverage AI in socially beneficial ways. However, the effective adoption of AI hinges not only on knowledge but also on individuals’ confidence in their ability to use AI meaningfully. Only when individuals perceive themselves as competent and empowered to apply AI to real-life challenges will they be prepared to capitalize on AI’s potential (Kong et al., 2025).
In this context, AI education remains crucial (Biagini, 2025; Dabbagh et al., 2025; Luckin, 2025; Msambwa et al., 2025). The range of literacies that a contemporary citizen must master keeps growing: print, television, information, digital, media, and computer literacies. To use AI-based technologies correctly, efficiently, and ethically, individuals must master various specific skills and knowledge. Thus, Artificial Intelligence Literacy (AIL) represents the emergent and pressing form of literacy that users must develop nowadays. Although we are living in an exceptionally dynamic period for AI research, the actual use of AI technologies—and the perceptions of their benefits and risks among specific populations—remains insufficiently studied. Understanding people’s reactions to emerging technologies is a prerequisite for research and interventions. Furthermore, conducting analyses across distinct AI user groups is essential for contextualizing the impact of AI literacy and for designing strategies that are responsive to the specific needs of these groups (Pinski and Benlian, 2024). Our study aims to fill this gap by investigating the AI perceptions of students who study Public Relations and Communication Sciences at three Romanian universities. Students are a particularly relevant socio-demographic group because they are preparing to enter a labor market in which, especially in the communication industry, AI is already widely used. This paper poses the following research questions: How are students coping with the new changes brought by AI-powered technologies? What is the role of AI literacy in trusting the results and increasing self-efficacy? What are the concerns regarding the use of AI tools? Are there negative effects that arise from the behavioral intention toward AI?
In the next section, we briefly review relevant studies on the phenomenon under investigation and introduce the main theoretical concepts. Based on the literature review, we then formulate the hypotheses and describe the research design. Finally, we discuss the results in relation to the state of the art, evaluate the value of the proposed model, identify its limitations, and outline future research directions.
2 Theoretical framework
When examining communication research from a metatheoretical perspective, we must note a series of challenging shifts. Communication can no longer rely on paradigms built exclusively around human—human interaction. As AI systems increasingly generate, adapt, and interpret messages, the longstanding separation between communication theory and AI research (Guzman and Lewis, 2020) has become untenable. Classical models fail to capture scenarios in which AI exercises communicative agency—autonomously producing recommendations, initiating exchanges, and interacting on behalf of users (Gunkel, 2012; Endacott and Leonardi, 2022). Such capabilities fundamentally recast technology from a passive conduit to an active participant in meaning-making. This shift demands a reconceptualization of communication’s functional, relational, and metaphysical foundations and calls for theoretical models that recognize the inherently social nature of human–machine interaction (Guzman and Lewis, 2020). Thus, communication theory must keep pace with the disruptive effects of AI, which provides channels and platforms, and becomes an assistant or a partner in communication. The emergence of Artificial Intelligence-Mediated Communication (AI-MC) (Hancock et al., 2020) underscores this need, as AI now routinely shapes communicative behavior through tools such as predictive text, translation systems, and voice assistants (Goldenthal et al., 2021; Li et al., 2023). Together, these developments make clear that communication theory must urgently integrate AI’s expanding role in shaping contemporary communicative practices.
Against this paradigmatic background, our paper is mainly grounded in a few theoretical approaches that provide the explanatory power needed for the proposed model. The Technology Acceptance Model (TAM) asserts that perceived ease of use and perceived usefulness shape the intention to use a particular technology, which in turn determines actual use (Davis, 1989). Fishbein and Ajzen (1975) emphasized the importance of how individuals perceive the ease of use, the advantages, and the credibility of a technology in the formation of attitudes and behavior toward that technology. As the Theory of Planned Behavior asserts, the way in which an individual perceives AI will shape his or her attitude toward AI. Uses and Gratifications Theory offers a fruitful framework for examining users’ motivations for interacting with AI to fulfill their tasks (Baek and Kim, 2023) and their various needs: cognitive (i.e., when they seek information), affective (i.e., when they engage in emotional or parasocial interaction with chatbots or AI assistants), and entertainment needs (Shao and Kwon, 2021). Not all interactions unfold as expected, and not everyone is an early adopter of new trends. Furthermore, Innovation Resistance Theory (Ram and Sheth, 1989) helps us understand not only the practical challenges, such as access, usage, or risks, that people face when using AI for communication, but also the mental blocks that can arise from differing views and beliefs. Finally, the CASA (Computers as Social Actors) approach (Gambino et al., 2020) explains that human-machine communication is eminently social: people cannot help applying to a chatbot or an AI system the same norms and expectations they apply to interpersonal human relationships.
3 Research model and hypotheses
3.1 AI literacy and internet use
Artificial intelligence literacy refers to “the ability to properly identify, use, and evaluate AI-related products under the premise of ethical standards” (Wang et al., 2022). Four pillars are essential for AIL: awareness, usage, evaluation, and ethics. AIL represents a complex ability that encompasses not only awareness of diverse AI tools but also the proficiency to apply them effectively for personal and professional tasks; it involves using AI pragmatically within an ethical framework while continuously considering individual and collective rights and responsibilities; and it requires comprehension of AI technology alongside the critical evaluation of the results produced by AI. AIL cannot be equated with a skill-based technique and, like most technological literacies, demands deliberation and critical thinking. An “overarching conceptual framework” of AIL was built by considering the three key facets of AI: autonomy, inscrutability, and learning, alongside three areas: learning methods, components, and effects (Pinski and Benlian, 2024). AIL is a holistic “enabling construct” that encompasses multifarious proficiency dimensions related to knowledge, awareness, skills, abilities, and experience (Pinski and Benlian, 2024). As Laupichler et al. (2022) noted in their scoping literature review on AIL in higher and adult education, a widely used definition considers AIL “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace” (Long and Magerko, 2020, p. 2). At least 17 core competencies are listed (Long and Magerko, 2020) as important for adults in higher education regarding what AI is, what it can do, how it actually works, how it should be used, and how it is perceived: recognizing artifacts that use AI, understanding the concept of “intelligence,” acknowledging the intrinsic interdisciplinarity of intelligent machines, differentiating between general and narrow AI, identifying AI’s strengths and weaknesses, imagining future AI applications and their consequences, understanding AI representations and providing examples, describing how AI operates and makes decisions, understanding machine learning steps, recognizing the human role in AI, possessing prerequisite data literacy, learning from and critically interpreting data, comprehending AI reasoning processes and its capacity to act, identifying and understanding sensors, having an ethical perspective, and recognizing that AI agents are programmable. Artificial Intelligence-Mediated Communication (AI-MC) literacy is a subcategory of AIL, and it refers to “a user’s level of familiarity (as a proxy for understanding), comfort, and confidence (as a proxy for skill) with individual forms of AI-MC, and with AI-MC tools as a subset of AI technology” (Goldenthal et al., 2021). AI-MC literacy is positively connected to AI-MC adoption. Because familiarity with AI-MC varies considerably from individual to individual, lower levels of understanding and comfort with AI-MC can be serious barriers to developing higher AI-MC literacy.
Individuals with physical access to information and communication technologies are more likely to recognize and use various AI tools. Even if access is only a necessary, not a sufficient, condition for developing AI skills, Internet connectivity and use act as a stimulus for the wider range of activities that can be carried out online. Digital competencies and personal motivation improve the evaluation of the outcomes of AI-based technologies (Vodă et al., 2022). Convenient access to digital technologies fosters deeper engagement with AI tools (Celik, 2023; Dabija and Vătămănescu, 2023).
University students represent a critical target for the AI market: on the one hand, they must quickly and efficiently acquire advanced competencies for the labor market, which is increasingly dominated by AI tools; on the other hand, not all academic specializations yet include dedicated courses for this purpose, even though all students, regardless of their field of study, will encounter situations involving AI use in both their professional and personal lives (Hornberger et al., 2023). Despite the growing importance of AI, its incorporation into educational curricula has largely been confined to STEM disciplines (Southworth et al., 2023; Ng et al., 2021). STEM students tend to have higher levels of AI literacy compared to their peers, whereas healthcare students exhibit lower levels of AI comprehension (Hornberger et al., 2023). AI education should extend beyond these fields to encompass broader societal needs, equipping all students with knowledge ranging from technical AI training to ethical considerations. It is important for students to graduate with the AI knowledge and skills needed to succeed in the 21st-century workforce, but is this aim currently being achieved? Moreover, the heterogeneous composition of students’ backgrounds should not be underestimated, as informal learning serves as a resource in this regard, with a positive correlation between the frequency of engagement in AI-related informal contexts and AIL (Hornberger et al., 2023; Pinski and Benlian, 2024). Furthermore, the current state of AI literacy among university students remains largely unclear (Hornberger et al., 2023). The bias that simply belonging to the digital-native generation automatically leads to strong digital competencies (Vodă et al., 2022) can also be extended to AIL. However, the literature indicates (Hornberger et al., 2023) that university students in this group face significant problems, including an inability to explain what AI is, a weak understanding of how it works, and several misunderstandings, such as believing that AI possesses human-like qualities or functions like the human brain.
Given these considerations, we propose the following hypothesis:
H1: The internet use of PR and Communication Sciences students has a positive influence on their AI literacy.
3.2 AI self-efficacy and the perceived ease of use of AI
Self-efficacy, or the expectation of personal efficacy, represents a person’s belief in their capacity to perform a certain task or behavior (Hooda et al., 2022). Self-efficacy determines many subsequent behaviors and attitudes, such as “whether coping behavior will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and aversive experiences” (Bandura, 1977, p. 191). Users’ beliefs about their ability to use AI tools effectively and to perform various tasks reflect their AI self-efficacy. AI self-efficacy is a “holistic perception” defined as individuals’ general belief in their ability to use and interact with AI (Wang and Chuang, 2024). The literature presents mixed results regarding the relationship between AIL and the intention to continue using AI: both a positive and a negative association have been reported (Pinski and Benlian, 2024). Users with stronger self-efficacy perceptions about using AI tools tend to show a greater intention to use AI. Negative effects have also been reported: greater task efficiency increased the “perceived creepiness” of AI (Baek and Kim, 2023). When people see AI as a helpful toolbox that can increase their productivity, their tendency to use it increases. At the same time, when AI results are remarkably accurate or when AI resolves difficult and complex tasks that exceed human capabilities, people find it uncanny and consequently develop mixed feelings about its power and use. In a competitive academic environment, students must proficiently complete various tasks within sometimes tight deadlines. In this context, AI tools can serve as valuable solutions to enhance their performance. Students who believe they can perform a specific task, even when faced with challenges, are less likely to use AI tools for completing academic activities on their behalf, whereas those with high self-efficacy are more likely to use AI tools to meet their need for interaction with others (Rodríguez-Ruiz et al., 2025).
The ease of use refers to the degree to which individuals believe or perceive that a particular technology is easy or effortless to use (Uzir et al., 2023). Therefore, both acceptance of and resistance to AI technologies depend on whether the user perceives that AI tools or products can be handled and managed easily or with difficulty. Perceived ease of use is a significant factor that shapes the intention to use AI technology and positively influences its perceived usefulness. If a person perceives AI as easy to use, the likelihood of adopting it for everyday tasks and activities increases.
Therefore, we formulate the following hypotheses:
H2: The AI literacy of PR and Communication Sciences students positively influences their AI self-efficacy.
H3: The AI literacy of PR and Communication Sciences students positively influences the perceived ease of AI use.
H4: The perceived ease of AI use exerts a positive influence on PR and Communication Sciences students’ AI self-efficacy.
3.3 Behavioral intention toward AI
Intention represents a person’s location in a subjective probability dimension involving a relationship between himself and some action; a behavioral intention, therefore, refers to a person’s subjective probability that they will perform some behavior (Fishbein and Ajzen, 1975). Human behavior is viewed as a reasoned action that results from a behavioral intention (Fishbein and Ajzen, 2010). A person is more likely to perform behaviors that are perceived to produce positive outcomes, that are normatively desirable, and that involve controllable behavioral processes. Perceived technology literacy predicts effort expectancy in e-learning contexts (Mohammadyari and Singh, 2015) and learning intention. Self-efficacy was the most important factor that directly predicted primary school students’ behavioral intentions toward AI (Chai et al., 2021). Also, for medical students, behavioral intention had a significantly strong and positive impact on actual learning and was significantly predicted by the personal relevance of AI, subjective norms, and perceived self-efficacy of learning AI tools (Li et al., 2022). Thus, we postulate the following hypothesis:
H5: The PR and Communication Sciences students’ AI self-efficacy positively influences their behavioral intention toward AI.
3.4 Trust in AI results and privacy concerns
Trust in technology represents an important domain of research in the field of human-computer interaction (Lankton et al., 2015). Trust represents the belief that AI agents’ responses, recommendations, and decisions are reliable and credible (Shin, 2021). Conversational AI’s features increase the level of user trust (Baek and Kim, 2023), emphasizing the relevance of CASA theory. The anthropomorphic characteristics of chatbots positively influence the perceived trust in AI chatbots (Cheng et al., 2022), which also positively influences the usage intention and user engagement (Mostafa and Kasamani, 2022). Trust in AI is also correlated with higher levels of perceived AI performance and use intention (Cheng et al., 2022). Literacy interventions can significantly recalibrate users’ reliance on machine learning models: participants become more discerning, relying on the models when appropriate but withholding trust when the model outputs appear uncertain or potentially flawed (Chiang and Yin, 2022). Literacy does not simply increase trust; rather, it promotes appropriate and calibrated reliance, thereby improving human–AI decision-making dynamics (Chiang and Yin, 2022). Trust also has a major influence on students’ engagement with AI technologies (Nazaretsky et al., 2025).
Privacy is defined as consumers’ right to confidentiality and control over their personal information (Gurung and Raja, 2016). Privacy concern refers to consumers’ uncertainty about potential loss due to a lack of privacy of their personal information in the online environment (Alzaidi and Agag, 2022). Users’ worries about the possible misuse of their private data result in a feeling of discomfort (Rajaobelina et al., 2021). At the same time, there is a positive relationship between privacy concerns and the perceived “creepiness of chatbots” (Dekkal et al., 2024), due to their resemblance to human agents. Privacy concerns constitute a major barrier to technology acceptance and use, including AI (Alzaidi and Agag, 2022; Acosta and Reinhardt, 2022), and have a significant negative influence on people’s passion to use personal digital assistants (Maduku et al., 2023). Perceived privacy violations represent a major obstacle for students in the adoption of AI-EdTech and diminish their trust in AI (Nazaretsky et al., 2025). Therefore, we formulate the following hypotheses:
H6: The PR and Communication Sciences students’ AIL has a positive influence on their trust in AI results.
H7: The PR and Communication Sciences students’ trust in AI results exerts a negative influence on their privacy concerns regarding AI.
3.5 Algorithm aversion and WOM regarding AI
Despite a set of rational reasons for appreciating algorithms, such as their superior performance, distrust of algorithms is a specific attitude of rejecting or not relying on algorithms in particular activities or decision processes (Kawaguchi, 2021). When a person and an algorithm make the same mistake, people tend to lose confidence in algorithms faster than in humans (Liu et al., 2023). When people do not follow algorithms that perform better than humans (referred to as a “behavioral anomaly”), their expected utility and self-efficacy may decrease (Filiz et al., 2021). In this vein, algorithm aversion is a “behavior of discounting algorithmic decisions with respect to one’s own decisions or others’ decisions, either consciously or unconsciously” (Mahmud et al., 2022). A lack of familiarity with algorithms leads to a higher aversion to them, while familiarity can be a “double-edged sword dilemma” (Mahmud et al., 2022) with different outcomes: it may result in algorithm acceptance (Fenneman et al., 2021) or in negative experiences (i.e., errors or poor algorithmic decisions).
WOM is an informal form of communication that acts as a driving force in shaping perceptions, decisions, and actions (Jo, 2023; Allen and Choudhury, 2022). People are less willing to share negative word-of-mouth after a service failure caused by an AI recommendation system, in contrast to a human employee, despite there being no difference in the failure or dissatisfaction with the deficiency (Huang and Philp, 2021). Comparing university students and office workers in relation to ChatGPT use (Jo, 2023), the utilitarian benefits (such as time and cost savings) are more relevant for office workers when recommending AI services or products to other people. WOM recommendations among university students are driven by the behavioral intention: if students have a strong will to use AI, they are more likely to share positive WOM about it.
Therefore, we propose the following hypotheses:
H8: The PR and Communication Sciences students’ privacy concerns regarding AI positively influence their algorithm aversion.
H9: The algorithm aversion of PR and Communication Sciences students negatively impacts their behavioral intention toward AI.
H10: The algorithm aversion of PR and Communication Sciences students exerts a negative influence on their word-of-mouth regarding AI.
H11: The behavioral intentions of the PR and Communication Sciences students toward AI positively impact their word-of-mouth about the technology.
Figure 1 depicts the hypothesized model.
4 Methodology
4.1 Research design and research context
This study used a quantitative approach in the form of a questionnaire-based survey. The investigation was cross-sectional because we assessed the variables using a single sample of subjects, specifically PR and Communication Sciences students from three Romanian universities. The choice of research design is justified by the fact that the purpose of the study and our research objectives are to test the relationships between the variables in the conceptual model. This study aims to investigate the factors that influence the spread of information about AI (word-of-mouth regarding AI). The authors’ decision to concentrate their investigation on Romania was based on the fact that the country is undergoing a rapid digital transformation and increasing AI adoption across professional sectors, including PR and communication, making it crucial to understand how students (i.e., future professionals) perceive these tools. Additionally, there is an empirical gap in understanding how students from Central and Eastern Europe, particularly those in Romania, perceive this phenomenon, which underscores the potential contribution of our study.
4.2 Sample
PR and Communication Sciences students from three Romanian universities comprised the study participants. We selected Communication and Public Relations students because they represent a population for whom AI technologies are becoming increasingly integral to both academic training and future professional practice. Their coursework frequently engages with digital communication theories and tools, making them a highly relevant group for examining AI literacy, self-efficacy, trust, and subsequent AI adoption behaviors. Moreover, these students are expected to act as early adopters and evaluators of technological innovations within organizational and media environments, which positions them as an appropriate population for predicting word-of-mouth intentions regarding AI tools. Using a relatively homogeneous cohort also strengthens internal validity by reducing variance unrelated to the constructs under investigation. For these reasons, Communication and PR students constitute a theoretically and practically justified sample for the aims of this study.
We employed a non-probability sampling technique, namely snowball sampling. This decision was based on the characteristics of the studied population of future public relations practitioners. After an initial quality check of the database, we obtained 402 valid responses. Incomplete questionnaires, meaning those that did not meet the study’s criteria or did not answer the control questions, were excluded. In terms of demographic information, most of the participants were female (76.4%; n = 402), were aged between 19 and 23 years (64.9%; n = 402), reported a low monthly income (less than 400 euros) (53.7%; n = 402), had graduated from high school (54%; n = 402), and were employed in communication, public relations, or marketing (52%; n = 402).
4.3 Data collection
Data were collected in the first part of 2024 (January–March). The invitation to participate was distributed online via email and social media (e.g., students’ Facebook groups). The initial section of the questionnaire provided a description of the study and included an online consent form that reinforced the confidentiality and anonymity of the responses and stipulated that participation was voluntary. Additionally, we used filter questions to collect relevant information about the respondents’ occupations and whether they were currently studying communication sciences. To help respondents understand the AI tools used for online communication, we provided easy-to-understand definitions and explanations of AI from Mikalef and Gupta (2021), described the different areas of AI (Zawacki-Richter et al., 2019), and explained how AI-mediated communication works according to Hancock et al. (2020), along with examples of AI functions and tools used in communication.
We ensured the protection of the participants’ rights by complying with national laws, specifically Law 677/2001 and Law 206/2004, which regulate ethical standards in scientific research in Romania. For this study, we did not collect any personal data that could potentially disclose the identities of the participants, including their identification numbers, physical characteristics, physiological traits, psychological attributes, economic conditions, cultural backgrounds, or social characteristics.
4.4 Questionnaire design and measures
We built the online questionnaire using validated scales from the literature that we adapted for our investigation. We measured Internet Use (IUS) by asking respondents to rate their agreement with six statements (e.g., “I use the Internet services for browsing online”), which we adapted from Hills and Argyle (2003). We assessed AI Literacy (AL) by inviting respondents to rate their agreement with three assertions (e.g., “I can distinguish between smart devices and non-smart devices”), which we adapted from Wang et al. (2022). The Algorithm aversion (AA) was assessed by requiring participants to report the extent to which they agreed to three statements (e.g., “In online communication, I will decide by myself rather than follow the decision given by AI tools”), adapting the scale from Mahmud et al. (2022). We measured AI Self-Efficacy (SEAI) by asking respondents how much they agreed with three statements, such as “I am certain that I can work effectively on different tasks in my online interactions with algorithmic platforms.” In this vein, we modified the scale from Shin et al. (2022). Behavioral intention toward AI (BI) was measured by asking respondents the extent to which they agreed to four statements (e.g., “In the future, I plan to access online communication apps and websites based on artificial intelligence more often”), adapted from Nagy and Hadjú (2021). Perceived Ease of Use of AI (PEU) was evaluated by asking respondents the extent to which they agreed to four items (e.g., “AI-powered online communication apps and websites are easy to use”), adapting the scale developed by Nagy and Hadjú (2021). Privacy concerns regarding AI (PCAI) were assessed by asking respondents the extent to which they agreed to two statements (e.g., “The personal information disclosed on the AI-driven communication platform is subject to many threats”), modified from Shin et al. (2022). We measured Trust in AI Results (TAIR) by asking respondents to indicate their level of agreement with three items (e.g., “I trust the recommendations by algorithm-driven platforms in online communication”), which we adapted from Shin et al. (2022). We evaluated Word-of-Mouth by asking respondents to indicate their level of agreement with two statements (e.g., “I recommend others to use artificial intelligence (AI)-based online communication apps and websites”), which we adapted from Uzir et al. (2023). All the 5-point Likert-type scales ranged from 1 (strongly disagree) to 5 (strongly agree) and were reflective. We conducted a pilot test with 40 respondents to guarantee that the items were easily comprehensible to the participants prior to administering the questionnaire. We also provided the participants with a draft of the online questionnaire, which included all of the items, and we requested that they review the form of the statements carefully. Participants highlighted any ambiguous terms or sentences and offered an alternative. The objective of pre-testing was to assess their language comprehension and detect any potential instrumental errors. Based on the feedback we received, we made necessary changes. Table 1 indicates that the scales exhibited good psychometric properties in terms of reliability and validity.
4.5 Data analysis procedure
We conducted descriptive and inferential statistical analyses to evaluate the data collected during the investigation. We derived the descriptive statistics for the measured variables, including means and standard deviations, using SPSS software, version 23. To test the hypotheses, we implemented structural equation modeling (SEM) with SmartPLS 4 during the inferential phase of our investigation. We selected the PLS-SEM method due to its advantages, which include its robustness to collinearity and to non-normal data distributions compared with covariance-based SEM, as well as its capacity to estimate complex models and to test multiple relationships between constructs simultaneously (Uzir et al., 2023).
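To make the analytical logic concrete, the sketch below approximates the hypothesized structural model outside proprietary software: composite construct scores are built as item means, each structural path is estimated with ordinary least squares on standardized scores, and t-values are bootstrapped. This is a simplified stand-in for illustration only, not the SmartPLS weighting algorithm, and the data file and item column names (e.g., IUS1 to IUS6) are hypothetical.

```python
# Minimal illustrative sketch (NOT the SmartPLS algorithm): composite scores,
# OLS path estimates, and bootstrapped t-values for the hypothesized model.
# The survey file and item column names below are hypothetical.
import numpy as np
import pandas as pd

ITEMS = {  # construct -> questionnaire items
    "IUS": [f"IUS{i}" for i in range(1, 7)], "AL": ["AL1", "AL2", "AL3"],
    "PEU": [f"PEU{i}" for i in range(1, 5)], "SEAI": ["SEAI1", "SEAI2", "SEAI3"],
    "TAIR": ["TAIR1", "TAIR2", "TAIR3"], "PCAI": ["PCAI1", "PCAI2"],
    "AA": ["AA1", "AA2", "AA3"], "BI": [f"BI{i}" for i in range(1, 5)],
    "WOM": ["WOM1", "WOM2"],
}
PATHS = {  # endogenous construct -> structural predictors, mirroring H1-H11
    "AL": ["IUS"], "PEU": ["AL"], "SEAI": ["AL", "PEU"], "TAIR": ["AL"],
    "PCAI": ["TAIR"], "AA": ["PCAI"], "BI": ["SEAI", "AA"], "WOM": ["AA", "BI"],
}

def scores(df: pd.DataFrame) -> pd.DataFrame:
    """Composite construct scores (item means), z-standardized."""
    s = pd.DataFrame({c: df[cols].mean(axis=1) for c, cols in ITEMS.items()})
    return (s - s.mean()) / s.std(ddof=0)

def path_coefficients(z: pd.DataFrame) -> dict:
    """OLS coefficients for every structural equation in PATHS."""
    out = {}
    for dv, ivs in PATHS.items():
        beta, *_ = np.linalg.lstsq(z[ivs].to_numpy(), z[dv].to_numpy(), rcond=None)
        out.update({(iv, dv): b for iv, b in zip(ivs, beta)})
    return out

def bootstrap_paths(df: pd.DataFrame, n_boot: int = 5000, seed: int = 42) -> dict:
    """Point estimates and bootstrapped t-values for each path."""
    rng = np.random.default_rng(seed)
    est = path_coefficients(scores(df))
    draws = {k: [] for k in est}
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), len(df))  # resample respondents with replacement
        for k, v in path_coefficients(scores(df.iloc[idx])).items():
            draws[k].append(v)
    return {k: (b, b / np.std(draws[k], ddof=1)) for k, b in est.items()}

# df = pd.read_csv("survey_data.csv")             # hypothetical file
# for (iv, dv), (beta, t) in bootstrap_paths(df).items():
#     print(f"{iv} -> {dv}: beta = {beta:.3f}, t = {t:.2f}")
```

In the actual analysis, the construct weighting and bootstrapping were performed in SmartPLS 4, as described above.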
5 Results
5.1 The evaluation of the measurement models
We assessed the measurement models in the preliminary phase using the structural equation modeling function of SmartPLS 4.0. We evaluated each reflective construct in the conceptual model for internal consistency and validity. Table 1 presents the item loadings, reliability statistics, average variance extracted (AVE), and variance inflation factor (VIF) values. The loadings satisfy the minimum threshold of 0.70 (Hair et al., 2010), demonstrating convergent validity across all assessed items. We assessed reliability using Cronbach’s α, with a criterion of 0.7 or above deemed acceptable for confirmatory analysis (Henseler and Sarstedt, 2013). All reliability scores are above 0.7, thereby confirming the model’s internal consistency. Also, all AVE values above 0.5 indicate a satisfactory model and confirm the convergent validity of the constructs (Chin, 1998). Composite reliability (CR) is considered satisfactory when composite values are above 0.7 (Hair et al., 2010). Furthermore, we evaluated the collinearity among the items in the measurement model. The dataset exhibits no multicollinearity problems, since the maximum VIF value is 3.024 (the BI3 item), which is below the threshold of 3.3 given by Hair et al. (2017). Therefore, the presence of common method bias (CMB) was not a concern. We also checked whether each construct was distinct from the others using the Fornell-Larcker and Heterotrait-Monotrait criteria. According to the Fornell-Larcker criterion (Hair et al., 2010; Henseler and Sarstedt, 2013), the square root of the Average Variance Extracted (AVE) for each latent variable must exceed the correlations between that variable and all of the other variables (Table 2).
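For transparency, the following minimal sketch shows how the reliability and convergent-validity statistics reported in Table 1 are defined. In PLS-SEM the loadings come from the weighting algorithm; here, purely as an illustrative proxy, each item’s loading is approximated by its correlation with the construct’s composite score, and the item names reuse the hypothetical columns from the previous sketch.

```python
# Illustrative definitions of the statistics reported in Table 1. In PLS-SEM the
# loadings come from the weighting algorithm; here each item's loading is
# approximated by its correlation with the composite score (a rough proxy).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

def reliability_table(df: pd.DataFrame, items: dict) -> pd.DataFrame:
    """Cronbach's alpha, composite reliability (CR), and AVE per construct."""
    rows = {}
    for construct, cols in items.items():
        block = df[cols]
        lam = block.apply(lambda col: col.corr(block.mean(axis=1)))  # proxy loadings
        ave = (lam ** 2).mean()                                      # AVE: mean squared loading
        cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
        rows[construct] = {"alpha": cronbach_alpha(block), "CR": cr, "AVE": ave}
    return pd.DataFrame(rows).T.round(3)

# Thresholds used in the text: loadings, alpha and CR >= 0.70; AVE >= 0.50.
# reliability_table(df, ITEMS)   # ITEMS as in the earlier sketch (hypothetical names)
```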
To ascertain that the constructs are not conceptually identical, we employed the Heterotrait-Monotrait (HTMT) criterion. The values for all the constructs in this study are below the recommended threshold of 0.9 (Henseler et al., 2015), which confirms that the constructs are distinct from each other (Table 3).
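The HTMT ratio used here can be expressed compactly as the mean correlation between the items of two different constructs divided by the geometric mean of the average within-construct item correlations; the sketch below implements this definition (the Fornell-Larcker check, by contrast, compares the square root of each construct’s AVE from the previous sketch with the inter-construct correlations). Item names remain hypothetical.

```python
# Sketch of the HTMT ratio between two constructs, following the definition above;
# item column names are the hypothetical ones used in the earlier sketches.
import numpy as np
import pandas as pd

def htmt(df: pd.DataFrame, items_a: list, items_b: list) -> float:
    """Mean between-block |correlation| / geometric mean of mean within-block |correlations|."""
    corr = df[items_a + items_b].corr().abs()
    hetero = corr.loc[items_a, items_b].to_numpy().mean()
    mono_a = corr.loc[items_a, items_a].to_numpy()[np.triu_indices(len(items_a), k=1)].mean()
    mono_b = corr.loc[items_b, items_b].to_numpy()[np.triu_indices(len(items_b), k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# e.g., htmt(df, ITEMS["AL"], ITEMS["TAIR"]) should stay below the 0.90 threshold.
```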
5.2 The evaluation of the structural models
To conduct an extensive assessment of the structural model, it was necessary to investigate the collinearity of the constructs. The highest VIF value found in the inner model is 1.157 (PEU → SEAI), which is below the limit, indicating that there is no multicollinearity among the variables. The goodness of fit of the saturated model is also acceptable: the standardized root mean square residual (SRMR) has a value of 0.063, which fulfills the recommended criterion of < 0.08 (Hair et al., 2017).
In addition, Internet Use explains 6.8% of the variance of AI Literacy (R2 = 0.068), while AI Literacy explains 5.9% (R2 = 0.059) of the variance of Trust in AI Results and 31.7% (R2 = 0.317) of the variance of Perceived Ease of Use of AI. Trust in AI Results explains 4.3% (R2 = 0.043) of the variance of Privacy Concerns regarding AI, while Privacy Concerns regarding AI explain 1.7% (R2 = 0.017) of the variance of Algorithm Aversion. Furthermore, 23.9% (R2 = 0.239) of the variance in AI Self-Efficacy is explained by Perceived Ease of Use of AI and AI Literacy. Also, 27.8% (R2 = 0.278) of the variance in Behavioral Intention toward AI is explained by AI Self-Efficacy and Algorithm Aversion. Finally, 50.9% (R2 = 0.509) of the variance in Word-of-Mouth regarding AI is explained by Algorithm Aversion and Behavioral Intention toward AI (see Figure 2).
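For readers following the simplified path-model sketch from Section 4.5, the R2 of each endogenous construct is simply the share of its variance reproduced by its structural predictors; a minimal illustration, reusing the hypothetical scores() helper and path specification from that sketch, is given below.

```python
# Follow-up to the path-model sketch in Section 4.5: R-squared of an endogenous
# construct, i.e., the share of its variance explained by its structural predictors.
import numpy as np

def r_squared(z, dv, ivs):
    """R^2 from the OLS structural equation of construct `dv` on predictors `ivs`."""
    X, y = z[ivs].to_numpy(), z[dv].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

# e.g., r_squared(scores(df), "WOM", ["AA", "BI"])   # reported above as R2 = 0.509
```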
As shown in Table 4, all 11 hypotheses (H1–H11) were empirically validated.
Hypothesis 1 posited that the internet use of PR and Communication Sciences students has a positive influence on their AI literacy. The outcomes (β = 0.261; t-value = 4.638; p < 0.001) confirm the existence of a positive effect. According to Cohen’s (1988) interpretation, β values ranging from 0.10 to 0.29 are classified as small effects, those from 0.30 to 0.49 as medium, and those of 0.50 or more as large. Therefore, we can argue that the predictor has a small positive effect on the predicted variable; thus, H1 is supported.
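The same cut-offs are applied to all the path coefficients reported below; a trivial helper encoding Cohen’s (1988) classification used in this section might look as follows.

```python
# Small helper encoding the Cohen (1988) cut-offs used to label the standardized
# path coefficients (beta) throughout this section.
def cohen_effect_size(beta: float) -> str:
    """Classify |beta|: small (0.10-0.29), medium (0.30-0.49), large (>= 0.50)."""
    b = abs(beta)
    if b >= 0.50:
        return "large"
    if b >= 0.30:
        return "medium"
    if b >= 0.10:
        return "small"
    return "negligible"

# e.g., cohen_effect_size(0.261) -> "small"; cohen_effect_size(0.563) -> "large"
```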
Hypothesis 2 proposed that the AI literacy of PR and Communication Sciences students positively influences their AI self-efficacy. The results of the analysis (β = 0.343; t-value = 4.807; p < 0.001) indicate the existence of a positive, medium effect; hence, H2 is supported.
Hypothesis 3 postulated that the AI literacy of PR and Communication Sciences students positively influences the perceived ease of use of AI. The findings (β = 0.563; t-value = 11.329; p < 0.001) confirm the presence of a positive, large effect, providing support for hypothesis H3.
Hypothesis 4 assumed that the perceived ease of use of AI exerts a positive influence on PR and Communication Sciences students’ AI self-efficacy. The outcomes of the examination (β = 0.205; t-value = 3.031; p = 0.002) indicate the occurrence of a positive and significant small effect; thus, H4 is supported.
Hypothesis 5 presumed that the PR and Communication Sciences students’ AI self-efficacy positively influences their behavioral intentions toward AI. The study insights (β = 0.468; t-value = 8.586; p < 0.001) indicate the presence of a positive, medium effect that supports H5.
Hypothesis 6 asserted that the PR and Communication Sciences students’ AI literacy has a positive influence on their trust in AI. The outcomes (β = 0.243; t-value = 3.435; p < 0.001) confirm the existence of a small positive effect; hence, H6 is supported.
Hypothesis 7 proposed that the PR and Communication Sciences students’ trust in AI results exerts a negative influence on their privacy concerns regarding AI. The findings (β = −0.208; t-value = 3.250; p < 0.001) confirm the presence of a negative, small effect, providing support for hypothesis H7.
Hypothesis 8 posited that the PR and Communication Sciences students’ privacy concerns regarding AI positively influence their algorithm aversion. The results (β = 0.130; t-value = 2.223; p = 0.026) pinpoint the existence of a positive, small, yet statistically significant effect; thus, hypothesis H8 is validated.
Hypothesis 9 postulated that the PR and Communication Sciences students’ algorithm aversion exerts a negative influence on their behavioral intention toward AI. The analysis outcomes (β = −0.157; t-value = 2.689; p = 0.007) indicate a small negative effect; thus, H9 is supported.
Hypothesis 10 stated that the PR and Communication Sciences students’ algorithm aversion exerts a negative influence on their word-of-mouth regarding AI. The results of the analysis (β = −0.147; t-value = 3.419; p < 0.001) confirm the occurrence of a small yet statistically significant negative effect; thus, H10 is supported.
Hypothesis 11 assumed that the PR and Communication Sciences students’ behavioral intention toward AI has a positive influence on their word-of-mouth regarding AI. The study results (β = 0.660; t-value = 17.383; p < 0.001) pinpoint a large positive effect, providing support for hypothesis H11.
6 Discussion
The research model posits that Internet use facilitates the development of AI literacy, which in turn positively influences AI self-efficacy, namely, students’ perceived capacity to perform AI-related tasks and handle various challenges that may arise in this process. Possessing a certain level of AI literacy impacts how respondents approach different AI-related situations and contexts, fostering a high level of confidence in their ability to successfully complete these tasks. The result, according to which the AI literacy of PR students shapes their AI self-efficacy, is convergent with Hornberger et al.’s (2023) study of AI literacy among university students in Germany. They found that their respondents had only a foundational level of understanding of AI, but that AI literacy is related to self-efficacy, interest in AI, and attitude toward AI. In their nationwide survey in Canada, Teng et al. (2022) showed that healthcare students felt unprepared and uneducated about AI, which may have contributed to their fear and anxiety about this topic. As we can see, while AIL positively correlates with AI self-efficacy and performance, its absence is associated with uncertainty, lack of confidence, AI inefficacy, and even anxiety.
AI literacy positively influences students’ trust in AI-generated outcomes, as being AI literate entails not only acquiring knowledge about AI but also interpreting data, critically evaluating it, and making informed decisions accordingly. The literature (Chiang and Yin, 2022) supports the positive impact of AIL on appropriate trust. AI self-efficacy also contributes to an appropriate reliance on AI (Chiang and Yin, 2022) as well as to appropriate delegation behavior toward AI (Pinski et al., 2023; Pinski and Benlian, 2024; Chai et al., 2021). Pinski and Benlian (2024) observed that while some studies suggest that task performance improves with behavioral change, others have not found any significant effect, highlighting the need for further empirical research in this area.
The AI literacy of PR students positively influences the perceived ease of use of AI, while the perceived ease of use of AI exerts a positive influence on PR and Communication Science students’ AI self-efficacy. Students with perceived AI literacy are more likely to consider AI-related tasks or AI-mediated communication (AI-MC) as easily manageable, unlike those who lack or do not perceive themselves as possessing the necessary AI competencies to complete such tasks. Our study showed that PR and Communication Science students’ trust in AI results has a negative influence on their privacy concerns regarding AI. Nazaretsky et al. (2025) also highlighted significant ethical concerns about AI use among students, which intensify as their understanding of AI deepens. This trend was especially noticeable among Master’s students in Computer and Communication Sciences compared to STEM Bachelor’s students. Their findings suggest that merely increasing AI knowledge does not necessarily reduce ethical concerns. Another finding of our study is that the students’ privacy concerns shaped their aversion to algorithms, which in turn negatively influenced their behavioral intention toward AI and their word-of-mouth communication. Fears related to privacy breaches and the misuse of personal data increase the likelihood of developing algorithm aversion, limiting the students’ intention to use AI tools and applications. Algorithm aversion must be studied more deeply and in a more nuanced way within this socio-demographic category, because the literature on this phenomenon among students is still scarce. In general, aversion to AI can be triggered by multiple factors: lack of familiarity with AI, overestimation of one’s own skills and knowledge, and being an expert (Mahmud et al., 2022). Individuals with task familiarity might be less inclined to engage in positive word-of-mouth, as their higher confidence in decision-making reduces their reliance on external validation (Allen and Choudhury, 2022). Finally, we also found that PR and Communication Science students with a positive behavioral intention toward AI were more willing to talk about AI. This result matches what Jo (2023) found, showing that students’ willingness to use AI greatly influenced how much they talked about it, helping to explain the patterns of discussion for this group.
7 Conclusion
7.1 Theoretical contributions
The AI literacy landscape is currently highly fragmented (Pinski and Benlian, 2024), with one major issue being the conceptual solidity of the term itself, especially when it must be rigorously delineated from related terms such as digital literacy. While literacy is always in process and continuously evolving, grounding the specificity of AI literacy is not merely a matter of scientific vocabulary but of consciously accounting for its complexity. The imprecise, implicit, or vague use of the term does not contribute to the advancement of this field. Our study aimed to provide a clearer depiction of this concept for a specific group of students. PR and Communication Science students have an urgent need for AI literacy, particularly as most of them fall into the two major categories of users described in the literature: non-expert AI users (primarily in their personal lives) and expert AI users (as a job requirement) (Pinski and Benlian, 2024). The effects of AI literacy remain one of the least studied aspects in the academic literature (Hornberger et al., 2023; Pinski and Benlian, 2024), and our study aimed to contribute to a better understanding of its consequences and correlates within a group that is at the forefront of this phenomenon. Promoting AI literacy (Nazaretsky et al., 2025) remains a necessary undertaking for equipping students with the knowledge, competences, and critical thinking needed to leverage AI in a safe and productive manner.
7.2 Practical implications
Our findings also have practical implications. The academic community, educational policymakers, and practitioners can gain a comprehensive understanding of the factors shaping students’ “orientation” in an increasingly prevalent AI ecosystem. Tech-savvy generations are not proficient per se; they need a solid AI literacy foundation to cope with the complex challenges of their personal and professional lives. University professors may find in our study’s results significant elements that contribute to a more nuanced understanding of students’ perceptions regarding some of the key factors influencing AI usage, such as self-efficacy, trust, or privacy concerns. In this regard, incorporating AI-related topics into courses or increasing the number of specialized AI courses could enhance knowledge, competencies, and critical thinking when assessing AI products. Educational policymakers can design action plans and long-term campaigns focused on digital and AI literacy. Furthermore, employers and industry specialists could use our study’s data to identify solutions for better collaboration between academia and the labor market, ultimately facilitating students’ employability. Understanding how students perceive AI efficacy, trust, or their intention to use AI provides valuable insights for developing well-structured educational and professional programs.
7.3 Limitations and future research directions
The present study had inherent limitations. Its primary limitation is the use of a regional sample, specifically students of PR and Communication Sciences from only three Romanian universities. These characteristics may limit the generalizability of the results to other national, cultural, or educational contexts. While this methodological option represents a limitation, it also serves as a strong starting point for future research, which could be conducted on students from different universities in other countries and continents. We acknowledge that numerous individual differences, as well as variations in academic organizational culture and national contexts, may affect such research in diverse ways. Therefore, diversifying samples can ensure more objective comparisons and add value to the understanding of this phenomenon. The self-reported measurement of the variables is another limitation of our study, and future studies could identify relevant objective measures for the variables included in our model.
Future research could introduce additional individual psychological variables as well as institutional factors specific to each university (such as AI-related courses or the use of AI tools in classrooms). Further studies might also explore whether AI literacy depends more on formal education or on informal ways of acquiring information and skills. This model could also be applied to other socio-professional categories, in various contexts, and across time. Finally, although the proposed model indicates relationships between the studied variables, it cannot definitively establish causality or track changes over time because of the study’s cross-sectional nature. Additionally, a longitudinal study could shed light on the differences that emerge within this model and the factors that drive them.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Author contributions
D-RO: Visualization, Validation, Resources, Conceptualization, Project administration, Data curation, Formal analysis, Writing – review & editing, Investigation, Funding acquisition, Methodology, Writing – original draft, Supervision, Software. CG: Formal analysis, Visualization, Resources, Writing – original draft, Funding acquisition, Project administration, Methodology, Data curation, Supervision, Validation, Investigation, Writing – review & editing, Conceptualization, Software. I-AG: Writing – original draft, Supervision, Writing – review & editing, Software, Formal analysis, Investigation, Funding acquisition, Project administration, Data curation, Methodology, Visualization, Validation, Resources, Conceptualization.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Acosta, L. H., and Reinhardt, D. (2022). A survey on privacy issues and solutions for voice-controlled digital assistants. Pervasive Mob. Comput. 80:101523. doi: 10.1016/j.pmcj.2021.101523
Allen, R. T., and Choudhury, P. (2022). Algorithm-augmented work and domain experience: the countervailing forces of ability and aversion. Organ. Sci. 33, 149–169. doi: 10.1287/orsc.2021.1554
Alzaidi, M. S., and Agag, G. (2022). The role of trust and privacy concerns in using social media for e-retail services: the moderating role of COVID-19. J. Retail. Consum. Serv. 68:103042. doi: 10.1016/j.jretconser.2022.103042
Baek, T. H., and Kim, M. (2023). Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence. Telemat. Inform. 83:102030. doi: 10.1016/j.tele.2023.102030
Bandura, A. (1977). Self-efficacy: toward a unifying theory of behavioral change. Psychol. Rev. 84, 191–215. doi: 10.1037/0033-295X.84.2.191
Biagini, G. (2025). Towards an AI-literate future: a systematic literature review exploring education, ethics, and applications. Int. J. Artif. Intell. Educ. 35, 2616–2666. doi: 10.1007/s40593-025-00466-w
Bilbao Eraña, A., and Arroyo-Sagasta, A. (2025). Fostering AI literacy in pre-service teachers: impact of a training intervention on awareness, attitude and trust in AI. Front. Educ. 10:1668078. doi: 10.3389/feduc.2025.1668078
Celik, I. (2023). Exploring the determinants of artificial intelligence (AI) literacy: digital divide, computational thinking, cognitive absorption. Telemat. Inform. 83:102026. doi: 10.1016/j.tele.2023.102026
Chai, C. S., Lin, P. Y., Jong, M. S. Y., Dai, Y., Chiu, T. K., and Qin, J. (2021). Perceptions of and behavioral intentions towards learning artificial intelligence in primary school students. Educ. Technol. Soc. 24, 89–101. Available online at: https://www.jstor.org/stable/27032858
Cheng, X., Zhang, X., Cohen, J., and Mou, J. (2022). Human vs. AI: understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Inf. Process. Manag. 59:102940. doi: 10.1016/j.ipm.2022.102940
Chiang, C. W., and Yin, M. (2022). Exploring the effects of machine learning literacy interventions on lay people’s reliance on machine learning models. In Proceedings of the 27th International Conference on Intelligent User Interfaces (IUI ʹ22), March 22–25, 2022, Helsinki, Finland. New York, NY: ACM. doi: 10.1145/3490099.3511121
Chin, W. W. (1998). “The partial least squares approach to structural equation modeling” in Modern methods for business research. ed. G. A. Marcoulides (New York: Psychology Press), 295–336.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. 2nd Edn. Hillsdale, NJ: Lawrence Erlbaum Associates.
Cult Research (2023). Impactul AI și al Asistenților Virtuali în România [The impact of AI and virtual assistants in Romania]. Available online at: https://cultresearch.ro/2023/12/07/tehnologia-ai-si-asistentii-virtuali-au-devenit-parte-integranta-a-societatii-digitale-majoritatea-celor-care-folosesc-chatgpt-cauta-informatii-pe-diverse-teme-sau-subiecte/
Dabbagh, H., Earp, B. D., Mann, S. P., Plozza, M., Salloch, S., and Savulescu, J. (2025). AI ethics should be mandatory for schoolchildren. AI Ethics 5, 87–92. doi: 10.1007/s43681-024-00462-1
Dabija, D.-D., and Vătămănescu, E.-M. (2023). Artificial intelligence: the future is already here. Oecon. Copernic. 14, 1053–1057. doi: 10.24136/oc.2023.031
Davenport, T. H., and Ronanki, R. (2018). “Artificial intelligence for the real world” in HBR’s 10 must reads on AI, analytics, and the new machine age (Boston, MA: Harvard Business Review Press), 67–84.
Davis, F. D. (1989). Technology acceptance model: TAM. In M. N. Al-Suqri and A. S. Al-Aufi, Information seeking behavior and technology adoption, 205:5. Available online at: https://quod.lib.umich.edu/b/busadwp/images/b/1/4/b1409190.0001.001.pdf
Dekkal, M., Arcand, M., Prom Tep, S., Rajaobelina, L., and Ricard, L. (2024). Factors affecting user trust and intention in adopting chatbots: the moderating role of technology anxiety in insurtech. J. Financ. Serv. Mark. 29, 699–728. doi: 10.1057/s41264-023-00230-y
Endacott, C. G., and Leonardi, P. M. (2022). Artificial intelligence and impression management: consequences of autonomous conversational agents communicating on one’s behalf. Hum. Commun. Res. 48, 462–490. doi: 10.1093/hcr/hqac009
European Commission (2021). Proposal for a regulation of the European Parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. Available online at: https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF
Fenneman, A., Sickmann, J., Pitz, T., and Sanfey, A. G. (2021). Two distinct and separable processes underlie individual differences in algorithm adherence: differences in predictions and differences in trust thresholds. PLoS One 16:e0247084. doi: 10.1371/journal.pone.0247084
Filiz, I., Judek, J. R., Lorenz, M., and Spiwoks, M. (2021). Reducing algorithm aversion through experience. J. Behav. Exp. Finance 31:100524. doi: 10.1016/j.jbef.2021.100524
Fishbein, M., and Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Fishbein, M., and Ajzen, I. (2010). Predicting and changing behavior: The reasoned action approach. New York: Psychology Press.
Gambino, A., Fox, J., and Ratan, R. A. (2020). Building a stronger CASA: extending the computers are social actors paradigm. Hum. Mach. Commun. 1, 71–85. doi: 10.3316/INFORMIT.097034846749023
Goldenthal, E., Park, J., Liu, S. X., Mieczkowski, H., and Hancock, J. T. (2021). Not all AI are equal: exploring the accessibility of AI-mediated communication technology. Comput. Human Behav. 125:106975. doi: 10.1016/j.chb.2021.106975
Gunkel, D. J. (2012). Communication and artificial intelligence: opportunities and challenges for the 21st century. Communication+1 1:1. doi: 10.7275/R5QJ7F7R
Gurung, A., and Raja, M. K. (2016). Online privacy and security concerns of consumers. Inf. Comput. Secur. 24, 348–371. doi: 10.1108/ICS-05-2015-0020
Guzman, A. L., and Lewis, S. C. (2020). Artificial intelligence and communication: a human–machine communication research agenda. New Media Soc. 22, 70–86. doi: 10.1177/1461444819858691
Hair, J. F., Black, W. C., and Babin, B. J. (2010). Multivariate data analysis: A global perspective. Andover: Pearson.
Hair, J. F., Celsi, M. W., Ortinau, D. J., and Bush, R. P. (2017). Essentials of marketing research. 4th Edn. Mason: McGraw Hill.
Hancock, J. T., Naaman, M., and Levy, K. (2020). AI-mediated communication: definition, research agenda, and ethical considerations. J. Comput.-Mediat. Commun. 25, 89–100. doi: 10.1093/jcmc/zmz022
Henseler, J., Ringle, C. M., and Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 43, 115–135. doi: 10.1007/s11747-014-0403-8
Henseler, J., and Sarstedt, M. (2013). Goodness-of-fit indices for partial least squares path modeling. Comput. Stat. 28, 565–580. doi: 10.1007/s00180-012-0317-1
Hills, P., and Argyle, M. (2003). Uses of the internet and their relationships with individual differences in personality. Comput. Hum. Behav. 19, 59–70. doi: 10.1016/S0747-5632(02)00016-X
Hooda, M., Rana, C., Dahiya, O., Rizwan, A., and Hossain, M. S. (2022). Artificial intelligence for assessment and feedback to enhance student success in higher education. Math. Probl. Eng. 2022, 1–19. doi: 10.1155/2022/5215722
Hornberger, M., Bewersdorff, A., and Nerdel, C. (2023). What do university students know about artificial intelligence? Development and validation of an AI literacy test. Comput. Educ. Artif. Intell. 5:100165. doi: 10.1016/j.caeai.2023.100165
Huang, B., and Philp, M. (2021). When AI-based services fail: examining the effect of the self-AI connection on willingness to share negative word-of-mouth after service failures. Serv. Ind. J. 41, 877–899. doi: 10.1080/02642069.2020.1748014
Jo, H. (2023). Understanding AI tool engagement: a study of ChatGPT usage and word-of-mouth among university students and office workers. Telemat. Inform. 85:102067. doi: 10.1016/j.tele.2023.102067
Kawaguchi, K. (2021). When will workers follow an algorithm? A field experiment with a retail business. Manag. Sci. 67, 1670–1695. doi: 10.1287/mnsc.2020.3599
Kong, S.-C., Korte, S.-M., Burton, S., Keskitalo, P., Turunen, T., Smith, D., et al. (2025). Artificial intelligence (AI) literacy – an argument for AI literacy in education. Innov. Educ. Teach. Int. 62, 477–483. doi: 10.1080/14703297.2024.2332744
Lankton, N. K., McKnight, D. H., and Tripp, J. (2015). Technology, humanness, and trust: rethinking trust in technology. J. Assoc. Inf. Syst. 16, 880–918. doi: 10.17705/1jais.00411
Laupichler, M. C., Aster, A., Schirch, J., and Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: a scoping literature review. Comput. Educ. Artif. Intell. 3:100101. doi: 10.1016/j.caeai.2022.100101
Li, J., Chu, Y., and Xu, J. (2023). Impression transference from AI to human: the impact of AI’s fairness on interpersonal perception in AI-mediated communication. Int. J. Hum.-Comput. Stud. 179:103119. doi: 10.1016/j.ijhcs.2023.103119
Li, X., Jiang, M. Y.-C., Jong, M. S.-Y., Zhang, X., and Chai, C.-S. (2022). Understanding medical students’ perceptions of and behavioral intentions toward learning artificial intelligence: a survey study. Int. J. Environ. Res. Public Health 19:8733. doi: 10.3390/ijerph19148733
Liu, N. T. Y., Kirshner, S. N., and Lim, E. T. (2023). Is algorithm aversion WEIRD? A cross-country comparison of individual-differences and algorithm aversion. J. Retail. Consum. Serv. 72:103259. doi: 10.1016/j.jretconser.2023.103259
Long, D., and Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM.
Luckin, R. (2025). Nurturing human intelligence in the age of AI: rethinking education for the future. Dev. Learn. Organ. Int. J. 39, 1–4. doi: 10.1108/DLO-04-2024-0108
Maduku, D. K., Mpinganjira, M., Rana, N. P., Thusi, P., Ledikwe, A., and Mkhize, N. H. B. (2023). Assessing customer passion, commitment, and word-of-mouth intentions in digital assistant usage: the moderating role of technology anxiety. J. Retail. Consum. Serv. 71:103208. doi: 10.1016/j.jretconser.2022.103208
Maheshwari, R. (2024). Top AI statistics and trends in 2024. Forbes Advisor, January 2, 2024. Available online at: https://www.forbes.com/advisor/in/business/ai-statistics/
Mahmud, H., Islam, A. N., Ahmed, S. I., and Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technol. Forecast. Soc. Change 175:121390. doi: 10.1016/j.techfore.2021.121390
Mikalef, P., and Gupta, M. (2021). Artificial intelligence capability: conceptualization, measurement calibration, and empirical study on its impact on organizational creativity and firm performance. Inf. Manag. 58:103434. doi: 10.1016/j.im.2021.103434
Mohammadyari, S., and Singh, H. (2015). Understanding the effect of e-learning on individual performance: the role of digital literacy. Comput. Educ. 82, 11–25. doi: 10.1016/j.compedu.2014.10.025
Mostafa, R. B., and Kasamani, T. (2022). Antecedents and consequences of chatbot initial trust. Eur. J. Mark. 56, 1748–1771. doi: 10.1108/EJM-02-2020-0084
Msambwa, M. M., Wen, Z., and Daniel, K. (2025). The impact of AI on the personal and collaborative learning environments in higher education. Eur. J. Educ. 60:e12909. doi: 10.1111/ejed.12909
Nagy, S., and Hajdú, N. (2021). Consumer acceptance of the use of artificial intelligence in online shopping: evidence from Hungary. Amfiteatru Econ. 23, 155–173. doi: 10.24818/EA/2021/56/155
Nazaretsky, T., Mejia-Domenzain, P., Swamy, V., Frej, J., and Käser, T. (2025). The critical role of trust in adopting AI-powered educational technology for learning: an instrument for measuring student perceptions. Comput. Educ. Artif. Intell. 8:100368. doi: 10.1016/j.caeai.2025.100368
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., and Qiao, M. S. (2021). Conceptualizing AI literacy: an exploratory review. Comput. Educ. Artif. Intell. 2:100041. doi: 10.1016/j.caeai.2021.100041
Pinski, M., Adam, M., and Benlian, A. (2023). AI knowledge: improving AI delegation through human enablement. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), Hamburg, Germany.
Pinski, M., and Benlian, A. (2024). AI literacy for users–a comprehensive review and future research directions of learning methods, components, and effects. Comput. Hum. Behav. Artif. Humans 2:100062. doi: 10.1016/j.chbah.2024.100062
Public Relations Society of America (2025). About public relations. Available online at: https://www.prsa.org/about/all-about-pr (Accessed December 10, 2025).
Rajaobelina, L., Prom Tep, S., Arcand, M., and Ricard, L. (2021). Creepiness: its antecedents and impact on loyalty when interacting with a chatbot. Psychol. Mark. 38, 2339–2356. doi: 10.1002/mar.21548
Ram, S., and Sheth, J. N. (1989). Consumer resistance to innovations: the marketing problem and its solutions. J. Consum. Mark. 6, 5–15. doi: 10.1108/EUM0000000002542
Rodríguez-Ruiz, J., Marín-López, I., and Espejo-Siles, R. (2025). Is artificial intelligence use related to self-control, self-esteem and self-efficacy among university students? Educ. Inf. Technol. 30, 2507–2524. doi: 10.1007/s10639-024-12906-6
Shao, C., and Kwon, K. H. (2021). Hello Alexa! Exploring effects of motivational factors and social presence on satisfaction with artificial intelligence-enabled gadgets. Hum. Behav. Emerg. Technol. 3, 978–988. doi: 10.1002/hbe2.293
Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int. J. Hum. Comput. Stud. 146:102551. doi: 10.1016/j.ijhcs.2020.102551
Shin, D., Kee, K. F., and Shin, E. Y. (2022). Algorithm awareness: why user awareness is critical for personal privacy in the adoption of algorithmic platforms? Int. J. Inf. Manag. 65:102494. doi: 10.1016/j.ijinfomgt.2022.102494
Southworth, J., Migliaccio, K., Glover, J., Glover, J., Reed, D., McCarty, C., et al. (2023). Developing a model for AI across the curriculum: transforming the higher education landscape via innovation in AI literacy. Comput. Educ. Artif. Intell. 4:100127. doi: 10.1016/j.caeai.2023.100127
Statista (2024). Artificial intelligence - global | Statista market forecast. Available online at: https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide#users
Teng, M., Singla, R., Yau, O., Lamoureux, D., Gupta, A., Hu, Z., et al. (2022). Health care students’ perspectives on artificial intelligence: countrywide survey in Canada. JMIR Med. Educ. 8:e33390. doi: 10.2196/33390
Uzir, M. U. H., Bukari, Z., Al Halbusi, H., Lim, R., Wahab, S. N., Rasul, T., et al. (2023). Applied artificial intelligence: acceptance-intention-purchase and satisfaction on smartwatch usage in a Ghanaian context. Heliyon 9:e18666. doi: 10.1016/j.heliyon.2023.e18666
Vodă, A. I., Cautisanu, C., Grădinaru, C., Tănăsescu, C., and de Moraes, G. H. S. M. (2022). Exploring digital literacy skills in social sciences and humanities students. Sustainability 14:2483. doi: 10.3390/su14052483
Wang, B., Rau, P.-L. P., and Yuan, T. (2022). Measuring user competence in using artificial intelligence: validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 42, 1324–1337. doi: 10.1080/0144929X.2022.2072768
Wang, Y. Y., and Chuang, Y. W. (2024). Artificial intelligence self-efficacy: scale development and validation. Educ. Inf. Technol. 29, 4785–4808. doi: 10.1007/s10639-023-12015-w
Keywords: AI literacy, AI self-efficacy, algorithm aversion, perceived ease of use of AI, privacy concerns, trust in AI, word-of-mouth
Citation: Obadă D-R, Gradinaru C and Gradinaru I-A (2026) From understanding to influence: the interplay of AI literacy, self-efficacy, and trust in predicting communication students’ AI adoption and word-of-mouth. Front. Commun. 10:1722464. doi: 10.3389/fcomm.2025.1722464
Edited by:
Jonathan Matusitz, University of Central Florida, United States
Reviewed by:
Ahlam Alharbi, Imam Abdulrahman Bin Faisal University, Saudi Arabia
Mazen Alzyoud, Al al-Bayt University, Jordan
Copyright © 2026 Obadă, Gradinaru and Gradinaru. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Daniel-Rareș Obadă, daniel.obada@uaic.ro