ORIGINAL RESEARCH article

Front. Educ., 31 July 2025

Sec. Higher Education

Volume 10 - 2025 | https://doi.org/10.3389/feduc.2025.1559509

This article is part of the Research Topic: Generative AI Tools in Education and its Governance: Problems and Solutions

ChatGPT perceptions, experiences, and uses with emphasis on academia


Haneen Ali1, Duha Ali2, Yasin Fatemi3* and Suhas Sudhir Bharadwaj3
  • 1Department of Mechanical and Industrial Engineering, Applied Science Private University, Amman, Jordan
  • 2Department of Industrial and Manufacturing Engineering, California Polytechnic State University—San Luis Obispo, CA, United States
  • 3Department of Industrial and Systems Engineering, Auburn University, Auburn, AL, United States

Introduction: With artificial intelligence technologies such as ChatGPT becoming increasingly integrated into educational environments, understanding their influence on academic stakeholders is essential. This study investigates how exposure to ChatGPT and demographic factors shape perceptions of this emerging AI tool in higher education.

Methods: A cross-sectional survey was conducted at Auburn University, involving 761 participants including both faculty and students. The survey examined technology exposure, ChatGPT familiarity, demographic variables (e.g., gender), and overall perceptions of ChatGPT in academic contexts.

Results: The analysis revealed significant differences in exposure and attitudes toward ChatGPT, with gender emerging as a key variable. Males reported greater exposure and more favorable perceptions of ChatGPT compared to other groups. Familiarity with AI tools was positively correlated with supportive attitudes toward their use in education.

Discussion: These findings highlight the importance of AI exposure in fostering acceptance and effective utilization of tools like ChatGPT. The results suggest a need for inclusive and equitable strategies to integrate AI in academic settings, particularly for underrepresented or less-exposed groups. Further research is encouraged to guide ethical and effective AI adoption in education.

1 Introduction

Generative Artificial Intelligence (GAI) has advanced significantly over the years. The field of Artificial Intelligence (AI) emerged in the mid-20th century (Groumpos, 2023; Kaplan and Haenlein, 2019), with early efforts focused on creating intelligent machines capable of performing basic tasks, such as chess- and checkers-playing programs (Ding et al., 2023; Tatnall, 2012). In the 1980s, expert systems mimicked human decision-making in specific domains, employing a symbolic framework grounded in rules and heuristics, such as the R1 (XCON) program (Pandey, 2023; Zhang et al., 2023). Then, in the 1990s, neural networks, or Parallel Distributed Processing (PDP), emerged from the principles of connectionism, revolutionizing AI by allowing systems to learn from data and giving rise to sophisticated applications such as computer vision and speech recognition (Pandey, 2023; Zhang et al., 2023). The true potential of AI became apparent with the introduction of generative models, which create new content using patterns discovered in existing data (Pandey, 2023; Ooi et al., 2023). In recent years, deep learning techniques, particularly Generative Adversarial Networks (GANs) and Generative Pre-trained Transformers (GPT), have propelled GAI to new heights (Abukmeil et al., 2021; Baidoo-Anu and Owusu Ansah, 2023; Hu, 2022; Jovanovic and Campbell, 2022). A further leap in GAI occurred with the release of ChatGPT by OpenAI in 2022 (Haleem et al., 2022; Patel and Lam, 2023; Rane, 2023).

Built on the GPT architecture, ChatGPT is a large language model designed to generate human-like text and engage in natural conversations with its users (Baidoo-Anu and Owusu Ansah, 2023; Aydin and Karaarslan, 2023). GPT belongs to a family of large-scale language models trained using a combination of supervised and reinforcement learning techniques (Kim and Wong, 2023). ChatGPT is sophisticated enough to handle remarkably complex tasks, such as Theory of Mind (ToM) tasks, and has demonstrated cognitive capability comparable to that of a 9-year-old child (Yu, 2023). By leveraging large amounts of publicly available online data through natural language processing (NLP), it can generate texts ranging from short paragraphs to long essays (Baidoo-Anu and Owusu Ansah, 2023; Aydin and Karaarslan, 2023). The more advanced GPT-3, developed by Brown et al. (2020), employs 175 billion parameters and is the driving force behind ChatGPT's advanced language generation capabilities (Baidoo-Anu and Owusu Ansah, 2023).

Since its launch in November 2022 (González-Arias and López-García, 2023; Marr, 2023), ChatGPT's success has been unprecedented, reaching a user base of 1 million in just 5 days (De Angelis et al., 2023; Roose, 2022; Walsh, 2022), a milestone that continues to set a benchmark for emerging AI tools such as DeepSeek. Major social media platforms such as Facebook, Twitter, and Instagram had considerably longer journeys, with growth trajectories spanning 300, 720, and 75 days, respectively, to reach the same milestone (Biswas, 2023). In comparison, other prominent AI tools like Google's Gemini, Anthropic's Claude, Meta's LLaMA, and DeepSeek have entered the field with their own innovations. Claude focuses on interpretability and safety via constitutional AI, while Gemini integrates multimodal capabilities, and DeepSeek emphasizes open-source language modeling with multilingual support. While these tools are advancing rapidly, ChatGPT still holds a benchmark status for user adoption speed and early academic application.

ChatGPT has been adopted across education (Baidoo-Anu and Owusu Ansah, 2023; Tate et al., 2023), engineering (Baidoo-Anu and Owusu Ansah, 2023; Qadir, 2023), journalism (Baidoo-Anu and Owusu Ansah, 2023; Pavlik, 2023), medicine (Baidoo-Anu and Owusu Ansah, 2023; O'Connor, 2023), and economics and finance (Baidoo-Anu and Owusu Ansah, 2023; Alshater, 2022; Needleman, 2023). In academia, it supports personalized learning (Baidoo-Anu and Owusu Ansah, 2023; Ray, 2023), functions as an instructor aid (Ray, 2023; Kasneci et al., 2023), and facilitates tasks such as exam preparation (Ray, 2023; Tlili et al., 2023), language coaching (Ray, 2023; Rudolph et al., 2023), and on-demand tutoring (Ray, 2023; Pardos and Bhandari, 2023). It aids in code writing (Nam and Bai, 2023; Rayner, 2023), data analysis, and STEM learning (Kim and Wong, 2023; Belland et al., 2015; Wu and Zhang, 2023), while also supporting research and writing tasks in non-STEM fields (Chan and Tsi, 2023; Jowarder, 2023). Its ability to summarize complex texts and assist with writing and editing academic documents has made it a valuable educational tool (Haleem et al., 2022; Mondal and Mondal, 2023; Bom, 2023).

Despite these benefits, concerns remain. Critics point to inaccuracies (Wu and Zhang, 2023; Ahmad Alhadi et al., 1999), ethical risks (Wu and Zhang, 2023; Katar et al., 2023; Dehouche, 2021; Sok and Heng, 2023), and the threat to critical thinking (Marr, 2023; Wu and Zhang, 2023; Chan and Tsi, 2023; Ahmad Alhadi et al., 1999). Many faculty worry students may rely on ChatGPT excessively, potentially undermining academic integrity (Lo, 2023; Mhlanga, 2023; Iqbal et al., 2022). There is growing concern about the difficulty of detecting AI-generated content and the limitations of current academic policies to regulate such use (Ju, 2023; Shoufan, 2023; Koo and Li, 2016).

Understanding user acceptance of such tools is vital. One useful framework is the Technology Acceptance Model (TAM), which explains how users come to accept and use technology. According to TAM, perceived usefulness and perceived ease of use influence behavioral intention, which in turn predicts actual usage (Davis, 1989; Venkatesh and Davis, 2000). This model provides a theoretical foundation for examining constructs like ChatGPT exposure and perception in this study, which reflect user experience and acceptance dynamics.

To that end, this study centers on four key areas: technology exposure, ChatGPT exposure, perceptions of ChatGPT, and ethical considerations. These factors were chosen because they play a critical role in shaping how individuals engage with and adopt AI tools in academic settings. By exploring these dimensions within a single university and comparing perspectives across both faculty and students, the study aims to offer practical insights that can inform more inclusive and effective AI integration in higher education.

2 Materials and methods

A cross-sectional survey, conducted from 8 August 2023 to 1 December 2023, at Auburn University, Alabama, was used to gather insights from 761 participants, comprising both faculty and students, who were selected based on their active status as university community members. They were sent personalized email invitations that offered information about the survey's purpose, the duration of participation, and a direct link to the online questionnaire designed and hosted on the widely used Qualtrics platform. The questionnaire encompassed a range of questions designed to capture diverse perspectives on relevant topics (Ali et al., 2022, 2023a). Encouragement to participate was emphasized in the invitations to enhance engagement.

Auburn University is a public research university in Alabama, United States, with an enrollment of over 31,000 students and more than 1,300 faculty members across 12 colleges and schools. Auburn is designated as an R1 institution, indicating very high research activity. As of Fall 2023, no formal university-wide policy had been established regarding the use of generative AI tools such as ChatGPT in teaching, learning, or research. Instead, decisions on AI usage were generally left to individual instructors or departments. Some faculty included disclaimers in their syllabi, either restricting or allowing the use of AI, while others made no mention of it. This decentralized approach reflects the emergent nature of institutional adaptation to AI technologies and underscores the importance of this study in informing future policy development.

While various studies have assessed ChatGPT's role in education, they often focus on either students or faculty, and frequently span different institutions. By contrast, this study focuses on both faculty and students within the same university, offering a more controlled comparison across roles and generations. This approach enables a clearer understanding of institutional and demographic factors influencing AI use.

This focus is timely and important. As universities rush to develop AI policies and training programs, there is an urgent need for evidence-based understanding of how different academic stakeholders perceive and use generative AI. Faculty shape curricula and academic standards, while students are the primary users of learning technologies. Their perspectives are interdependent yet distinct. Studying them together, within the same educational and cultural setting, enables institutions to develop more coherent, inclusive, and effective AI integration strategies. This research thus aims to fill a crucial gap and support academic institutions in navigating the complexities of ethical, equitable AI adoption.

2.1 Ethical considerations

This study obtained ethical approval from the Ethics Committee of Auburn University in accordance with the Declaration of Helsinki, under Institutional Review Board (IRB) protocol reference 20–238 EX 2005. The participants were informed about the survey's aims and the associated risks on the initial survey page, which was accompanied by a consent form that outlined the voluntary nature of participation and assured that no identifiable information would be collected, protecting anonymity. Participants were not compensated for their time and engagement. The study adhered to ethical principles, emphasizing voluntary participation, transparency, and participant privacy throughout the survey process.

2.2 Questionnaire development

The survey instrument was developed using the framework proposed by Carayon et al. (2006). This process encompassed four primary stages:

1. Conceptualization & operationalization: This initial stage involved the creation of a list of domains pertinent to the research objectives, along with corresponding items for each domain.

2. Preliminary testing: Interviews and focus groups were conducted with students and faculty from various disciplines to test the initial questionnaire and gather diverse perspectives.

3. Instrument modification: Based on the insights gathered from the pilot study, the survey instrument was modified.

4. Finalization and psychometric testing: To ensure reliability and validity, the revised instrument was subjected to further psychometric testing using a larger sample.

The content of the initial survey instrument, the methods for conducting the pilot testing, and the implementation of the final version of the instrument are detailed below.

2.3 Initial survey instrument

To ensure the reliability and validity of our survey, we utilized established scales where feasible, following the guidelines of Carmines and Zeller (1979). The initial questionnaire included the following domains, derived from gaps identified in the existing literature and from a conceptual framework developed to connect them:

1. Demographic and Background Information

2. ChatGPT Exposure Assessment

3. Perceptions of ChatGPT: Utility and Academic Application

4. Ethical Perceptions Related to ChatGPT

We conducted approximately 20 h of interviews with 25 students (across STEM and non-STEM majors, at both undergraduate and graduate levels), eight faculty members (from five different departments, with teaching and research responsibilities), and three instructors (focused on teaching, from three different departments). These interviews aimed to explore ChatGPT usage in classrooms and research settings, gather perceptions regarding exposure to and comfort levels with ChatGPT, and collect observational insights from academic environments. Based on the data gathered, a new domain was added to the instrument:

1. Demographic and Background Information

2. Technology Exposure Assessment

3. ChatGPT Exposure Assessment

4. Perceptions of ChatGPT: Utility and Academic Application

5. Ethical Perceptions Related to ChatGPT

The construction of the survey instrument involved developing a preliminary list of items for each domain, informed by the interview data and input from four focus groups of experts. These groups comprised three faculty members (from Engineering, Psychology, and Education), five undergraduate students, three graduate students, an instructional course designer, an instructor, and our research team. Each focus group member independently reviewed the preliminary list, providing feedback and voting on the inclusion of each item based on relevance and content clarity using a Content Validity Index (CVI); a four-point Likert scale was employed for these ratings (Polit and Beck, 2006). Of the 61 items evaluated, 40 were deemed relevant and retained, 21 were removed as irrelevant, and 13 of the retained items were revised for clarity. The final instrument used a five-point Likert scale ranging from "strongly agree" to "strongly disagree."
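To make the screening step concrete, the sketch below shows how an item-level Content Validity Index could be computed from reviewers' four-point relevance ratings. The ratings matrix, the number of reviewers, and the 0.78 retention cutoff are illustrative assumptions rather than the study's actual data.

```python
import numpy as np

def item_cvi(ratings: np.ndarray) -> np.ndarray:
    """Item-level CVI: proportion of reviewers rating each item 3 or 4
    on the four-point relevance scale. ratings has shape (items, reviewers)."""
    endorsed = ratings >= 3
    return endorsed.mean(axis=1)

# Illustrative ratings: 61 candidate items, 13 hypothetical focus-group reviewers.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 5, size=(61, 13))

i_cvi = item_cvi(ratings)
keep = i_cvi >= 0.78          # commonly cited I-CVI retention cutoff (assumed here)
print(f"Retained {keep.sum()} of {len(keep)} items")
```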

2.4 Pilot study

A pilot study was conducted with 50 randomly selected faculty members and students (the target population). Two weeks later, the same questionnaire was redistributed to assess consistency. The intraclass correlation coefficient (ICC) was used to evaluate the participants' measurement consistency. The pilot study revealed high internal consistency reliability (Cronbach's α = 0.95 for the overall scale), and the ICC for total scores was 0.85 (p < 0.01), indicating acceptable stability over the 2-week period (Tavakol and Dennick, 2011; Koo and Li, 2016).
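As a rough illustration of these reliability checks, the snippet below computes Cronbach's alpha from item-level responses and a test-retest ICC from the two administrations. The data frames, column names, and the choice of ICC(2) via the pingouin package are assumptions, since the paper does not report the exact ICC form or software used.

```python
import pandas as pd
import pingouin as pg   # assumed package choice for the ICC

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one row per respondent, one column per Likert item."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def test_retest_icc(long: pd.DataFrame) -> float:
    """long: columns 'participant', 'administration' ('t1'/'t2'), 'total_score'."""
    icc = pg.intraclass_corr(data=long, targets="participant",
                             raters="administration", ratings="total_score")
    return float(icc.loc[icc["Type"] == "ICC2", "ICC"].iloc[0])
```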

2.5 Operationalization of domains

2.5.1 Technology exposure assessment

Technology exposure refers to an individual's familiarity and comfort with the use of technological tools in teaching, learning, or research. This aspect assesses the extent to which someone is acquainted with incorporating technology into educational practices and explores any ethical concerns they may have encountered in the process (Holzinger et al., 2023; Kalonde and Mousa, 2016; Oh and Reeves, 2014).

2.5.2 ChatGPT exposure assessment

ChatGPT exposure assessment refers to the systematic evaluation of individuals' familiarity, knowledge, and engagement with ChatGPT. It involves exploring participants' awareness of ChatGPT, their online searches related to the tool, their understanding of broader AI concepts, and their practical usage experiences in both academic and non-academic settings. Through a series of targeted questions, this assessment aims to measure the extent of participants' exposure to ChatGPT and their overall interactions with AI-powered technologies (Ansari et al., 2023; Kumar, 2023; Sullivan et al., 2023).

2.5.3 ChatGPT perception

This section evaluates the perceived usefulness and applications of ChatGPT in academic settings, including learning, teaching, and research. The items in this section were designed to capture the quantitative aspects of the faculty's and students' perceptions related to ChatGPT's usefulness in learning and study, applications in teaching, role in research, and general academic use.

2.5.4 ChatGPT usefulness

ChatGPT usefulness refers to the participants' perspectives on the practical value of ChatGPT in academic settings. It evaluates ChatGPT's potential utility in teaching/learning and research and captures individuals' beliefs about the benefits of integrating ChatGPT into educational and research contexts, as reflected in their responses to survey items gauging its perceived usefulness (Ali et al., 2023b; Baradel, 2023; Rasul et al., 2023).

2.5.5 Application in work/study

ChatGPT application for students and faculty members involves evaluating their willingness to incorporate ChatGPT, or another AI-powered tool, into their teaching or research practices. This assessment includes evaluating the participants' openness to exploring potential applications, piloting the tool in educational settings, and considering the possibility of ChatGPT replacing some current instructor-related tasks. This assessment aims to reflect individuals' attitudes toward the integration of AI technologies, emphasizing a proactive stance in adopting and experimenting with ChatGPT to enhance teaching and research endeavors within academic environments (Akiba and Fraboni, 2023; Firat, 2023; Ipek et al., 2023).

2.6 Implementation of the revised survey instrument

The final step in developing the questionnaire survey was its implementation. The revised questionnaire was converted into a web-based format using Qualtrics and emailed to the participants. The first page of the survey provided all the necessary information. By proceeding with the survey, the respondents implicitly gave their consent to participate.

3 Results

Before analysis, the data were cleaned. Participants whose questionnaire progress was below 60% were excluded, reducing the total sample from 761 to 626. Most missingness occurred due to early survey dropout rather than item-level nonresponse. For the remaining cases, missing values were minimal and were imputed using the mode for categorical data and the mean for numerical data. This straightforward approach was selected given the low proportion of missing data and its scattered nature. Reliability was then analyzed using Cronbach's alpha coefficients, and the instrument demonstrated a robust overall Cronbach's alpha of 0.80. Each construct demonstrated strong alpha values, ranging between 0.75 and 0.86 (Table 1). These results exceed the commonly accepted threshold of 0.70 (Murtagh and Heck, 2012; Vaske et al., 2017) and suggest a satisfactory level of internal consistency.

Table 1. Reliability of domains.
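A minimal sketch of the cleaning and imputation steps described above is given below; the file name, the progress column, and the column types are hypothetical.

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")        # hypothetical raw export
df = df[df["progress"] >= 60].copy()            # drop respondents below 60% progress

for col in df.columns:
    if df[col].isna().any():
        if pd.api.types.is_numeric_dtype(df[col]):
            df[col] = df[col].fillna(df[col].mean())           # mean for numeric items
        else:
            df[col] = df[col].fillna(df[col].mode().iloc[0])   # mode for categorical items
```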

Further, the descriptive statistics of the demographic variables were examined. Then, a Kruskal-Wallis test was applied to investigate the effect of these variables on domains such as technology exposure, ChatGPT exposure, ChatGPT perception, and ethical considerations.

3.1 Descriptive statistics of demographic variables

Table 2 presents the descriptive statistics of the participants. Students comprised 77% of the sample (n = 483), while faculty/instructors constituted 23% (n = 143). On average, faculty/instructors completed 98.79% of the questionnaire, while students completed 96.75%. The average age was 23.38 years for students and 47.03 years for faculty/instructors. Among the students, 61% (n = 296) were male, 37% (n = 180) were female, and < 1% (n = 2) were non-binary. Among faculty/instructors, 49% were male (n = 70), 43% were female (n = 62), 3% were non-binary (n = 3), < 1% were gay (n = 1), and < 1% were agender (n = 1). Faculty/instructors had an average of 10.94 years of work experience at the university, while students had been studying for an average of 2.60 years. The majority of participants were white, accounting for 71% (n = 344) of students and 71% (n = 102) of faculty/instructors, followed by smaller proportions of Asian, Black American, and Hispanic participants. Faculty members were distributed across various colleges, with the highest representation in Liberal Arts (40%, n = 57); other notable affiliations included Engineering (27%, n = 39), Business (4%, n = 6), and Sciences and Mathematics (7%, n = 10). The majority of students were in Engineering (59%, n = 286), followed by Liberal Arts (12%, n = 59), Business (9%, n = 41), and Sciences and Mathematics (8%, n = 37).

Table 2. Descriptive statistics.

Among the students, 68% (n = 329) were undergraduates, and 32% (n = 154) were graduates. Approximately one-third of the faculty members (n = 48) were tenured, 17% (n = 24) were on the tenure track, 13% (n = 18) held full-time instructor or adjunct faculty positions, and the remaining 37% (n = 53) were categorized as “other.”

3.2 Correlation

Table 3 presents the correlation coefficients among the study variables for students. Age was positively correlated with the number of years students had been studying at the university (r = 0.315, p < 0.001) and negatively correlated with ethical considerations (r = −0.12, p < 0.001). The number of years students had been studying was also negatively correlated with ethical considerations (r = −0.10, p = 0.02). ChatGPT exposure assessment was positively correlated with ChatGPT perception (r = 0.52, p < 0.001) and ethical considerations (r = 0.21, p < 0.001). A positive correlation was also found between ChatGPT perception and ethical considerations (r = 0.24, p < 0.001).

Table 3. Students' correlation coefficients.

Table 4 displays the correlation coefficients between the different domains for faculty/instructor members. Age was highly correlated with the number of years faculty/instructors had been teaching or researching at the university (r = 0.73, p < 0.001). The number of years faculty had been teaching was also positively correlated with ethical considerations (r = 0.17, p = 0.048). ChatGPT exposure assessment was negatively correlated with technology exposure assessment (r = −0.24, p = 0.004) and positively correlated with ChatGPT perception (r = 0.57, p < 0.001) and ethical considerations (r = 0.26, p = 0.001). A positive correlation was also found between ChatGPT perception and ethical considerations (r = 0.30, p < 0.001).

Table 4. Faculty's correlation coefficients.
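The domain-level correlations in Tables 3 and 4 could be reproduced along the following lines. The paper does not state whether Pearson or Spearman coefficients were used, so Spearman rank correlations (consistent with the non-parametric tests elsewhere in the analysis) are shown here, and the data file and column names are hypothetical.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr

students = pd.read_csv("student_domain_scores.csv")   # hypothetical per-respondent domain scores

domains = ["age", "years_at_university", "tech_exposure",
           "chatgpt_exposure", "chatgpt_perception", "ethical_considerations"]

for a, b in combinations(domains, 2):
    rho, p = spearmanr(students[a], students[b], nan_policy="omit")
    print(f"{a} vs {b}: r = {rho:.2f}, p = {p:.3f}")
```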

3.3 Mann-Whitney U-test

Since the data were not multivariate normal, we employed a non-parametric method to compare faculty and student responses. Specifically, we used the Mann-Whitney U-test (Mann and Whitney, 1947), which does not assume a normal distribution and is suitable for comparing two independent groups, to determine whether the differences between the two groups' responses were statistically significant. Table 5 presents the results of the Mann-Whitney U test comparing the responses of faculty and students.

Table 5. The Mann-Whitney U test between faculty and students.
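A sketch of the faculty-versus-student comparisons behind Table 5, using SciPy's Mann-Whitney U implementation; the cleaned data file, domain score columns, and group labels are assumed names.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("cleaned_responses.csv")        # hypothetical cleaned data
faculty = df[df["affiliation"] == "faculty"]
students = df[df["affiliation"] == "student"]

for domain in ["tech_exposure", "chatgpt_exposure",
               "chatgpt_perception", "ethical_considerations"]:
    u, p = mannwhitneyu(faculty[domain], students[domain], alternative="two-sided")
    print(f"{domain}: U = {u:.1f}, p = {p:.4f}")
```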

The analysis revealed a significant difference in the technology exposure assessment scores between students and faculty. Faculty members were more exposed to technology (mean = 5.65) than students (mean = 5.60): 90% (n = 129) of faculty members reported being comfortable working with technology, compared with 84% of students (n = 408). Additionally, ethical concerns related to the use of technology in teaching/learning or research had been encountered by 33% (n = 47) of faculty and 47% (n = 228) of students.

The ChatGPT exposure assessment revealed no significant difference between the two groups. However, the ChatGPT perception results showed a significant difference between students and faculty. Students were more favorable toward ChatGPT (mean = 5.45) compared to faculty (5.33): 72% (n = 347) of the students believed that ChatGPT could be a useful teaching/learning tool, compared to 50% (n = 72) of the faculty. Additionally, 65% (n = 314) of students and 44% (n = 63) of faculty reported ChatGPT to be beneficial for research. Regarding ChatGPT application in courses, 60% (n = 289) of the students were in favor, whereas 46% (n = 66) of the faculty members were open to experimenting with ChatGPT in their courses.

Figures 1, 2 show the perceived applications of ChatGPT. Figure 1 illustrates faculty/instructors' opinions on specific applications of ChatGPT, revealing that 60% of faculty/instructor members believed ChatGPT could be beneficial for writing and debugging code. Additionally, 53% found it useful for extracting data from text. This is followed by its usefulness in finding answers to assignments, composing emails and essays, and assisting with grading. These findings highlight the varied potential applications of ChatGPT as perceived by faculty/instructors.

Figure 1. Faculties' opinion regarding the specific application of ChatGPT (bar chart of scores by task; "writing and debugging code" scores highest, followed by "extracting data from text" and "finding answers to assignments," with "write a cover letter" lowest; scores range from 0 to just above 0.6).

Figure 2. Students' opinion regarding the specific application of ChatGPT (bar chart of scores by task; "writing and debugging code" scores highest, followed by "extracting data from text" and "finding answers to assignments," with "write a cover letter" lowest).

Figure 2 shows that a notable 75% of the student participants found ChatGPT useful for writing and debugging code, followed by its applications in extracting data from text, finding answers to assignments, composing emails and essays, and assisting with grading. Each of these applications highlights the diverse potential of ChatGPT in enhancing various aspects of academic work.

Finally, the ethical considerations domain revealed no significant difference between the two groups. However, 31% of the students (n = 151) believed that using ChatGPT (or another AI-powered tool) in coursework is ethical, compared to 17% of faculty (n = 25). In addition, 19% of the students (n = 94) believed that the use of ChatGPT should be added to the list of academic integrity violations, while 39% of the faculty (n = 56) supported this.

Additionally, we asked both faculty and students about specific classes or assignments where they would be concerned about the use of ChatGPT. Faculty members' concerns are reported in Table 6, and students' concerns in Table 7.

Table 6. Faculty members reported concerns regarding students using ChatGPT.

Table 7. Students reported concerns regarding students using ChatGPT.

3.4 Kruskal-Wallis test

Given the non-normal distribution of the data, we utilized the Kruskal-Wallis test to analyze the variance. The Kruskal-Wallis test is a non-parametric alternative to the one-way ANOVA and is used to determine whether there are statistically significant differences between the medians of three or more independent groups. It is particularly useful when the assumptions of ANOVA, such as normality and homogeneity of variances, are not met (Kruskal and Wallis, 1952). Kruskal-Wallis test results for the demographic variables and the four domains (technology exposure, ChatGPT exposure, ChatGPT perception, and ethical considerations) are shown in Table 8.

Table 8. Kruskal-Wallis test for demographic variables and four variables.
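The comparisons in Table 8 could be run along these lines with SciPy's Kruskal-Wallis implementation; the demographic and domain column names are hypothetical, and each domain score is tested separately against each demographic grouping.

```python
import pandas as pd
from scipy.stats import kruskal

df = pd.read_csv("cleaned_responses.csv")        # hypothetical cleaned data

def kw_test(frame: pd.DataFrame, group_col: str, score_col: str):
    """Kruskal-Wallis H test of score_col across the levels of group_col."""
    groups = [g[score_col].dropna() for _, g in frame.groupby(group_col)]
    return kruskal(*groups)

for demo in ["gender", "college", "affiliation"]:
    for domain in ["tech_exposure", "chatgpt_exposure",
                   "chatgpt_perception", "ethical_considerations"]:
        h, p = kw_test(df, demo, domain)
        print(f"{domain} by {demo}: H = {h:.2f}, p = {p:.4f}")
```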

3.4.1 Technology exposure assessment

The Kruskal-Wallis test indicated that affiliation was a statistically significant predictor of technology exposure. Faculty/instructor members (mean = 5.65) had more exposure to technology than their student counterparts (mean = 5.60), as shown in Table 8.

3.4.2 ChatGPT exposure assessment

Gender was found to be a statistically significant predictor of ChatGPT exposure. Male participants reported higher exposure (mean = 5.53) than other genders. College affiliation was also a significant factor. The top three colleges reporting higher ChatGPT exposure were Business (mean = 5.71), Engineering (mean = 5.50), and Human Sciences (mean = 5.39). In contrast, Pharmacy, Agriculture, and Veterinary Medicine reported the lowest exposure levels (means = 5.09, 5.07, and 4.63, respectively).

3.4.3 ChatGPT perception

The results revealed that gender significantly influenced ChatGPT perception, with male participants reporting a higher average (mean = 5.50) than other genders. College affiliation also played a significant role: Business (mean = 5.72), Human Sciences (mean = 5.59), and Education (mean = 5.51) reported the highest perceptions, whereas Liberal Arts, Agriculture, and Veterinary Medicine had the lowest average perceptions (means = 5.27, 5.23, and 4.87, respectively). Affiliation was also a significant predictor, with students reporting higher ChatGPT perception (mean = 5.45) compared to faculty (mean = 5.33).

3.4.4 Ethical considerations

Demographic variables were not statistically significant predictors of ethical considerations.

4 Discussion

The present study aimed to explore the perceptions and exposure of faculty members and students to ChatGPT in an academic setting. The findings reveal intriguing patterns and correlations that warrant further discussion.

A significant positive relationship was found between students' familiarity and experience with ChatGPT and their positive attitudes toward its utility in academic contexts. This relationship was based on the evaluation of two key factors: "ChatGPT exposure," which encompassed whether students had previously heard of ChatGPT, their understanding of artificial intelligence, and their prior use of ChatGPT or other AI tools in non-academic or educational settings; and "ChatGPT perception," which was determined by students' beliefs about the usefulness of ChatGPT in their research and educational activities as well as their interest in exploring ChatGPT's potential academic applications. This suggests that the more students interact with ChatGPT, the more favorably they view its capabilities and potential applications in academic contexts; in other words, increased interaction with AI tools such as ChatGPT enhances students' positive attitudes toward the utility of such applications. This pattern might be explained by the notion that familiarity breeds acceptance: increased exposure to something often leads individuals to develop a liking for it, along with a better understanding and appreciation of it. As students interact more with AI technologies, they become more comfortable and proficient in utilizing them.

"Ethical considerations" aim to explore students' perceptions regarding the use of ChatGPT and other AI-powered tools in academic settings. They address various aspects, including the appropriateness, ethicality, and fairness of using such tools, as well as concerns about academic integrity and potential regulations. The questions also seek to understand whether educators would be inclined to explicitly prohibit or encourage the use of AI tools in their syllabi and whether there are specific contexts in which the use of these tools might be problematic. The study found positive correlations of both "ChatGPT exposure" and "ChatGPT perception" with "ethical considerations" among students, indicating that those more familiar with ChatGPT and more exposed to other AI tools tend to report higher ethical considerations regarding their use. This correlation can be attributed to several factors: increased awareness and understanding of AI capabilities and ethical implications (Fjeld et al., 2020), fostering critical engagement and ethical decision-making (Binns et al., 2018), practical experience with AI highlighting real-world ethical dilemmas (Mittelstadt et al., 2016), and the impact of educational initiatives such as courses and workshops that include AI ethics (Floridi and Cowls, 2022), as well as institutional policies and guidelines that emphasize the importance of ethical considerations (Jobin et al., 2019).

The negative correlation between ChatGPT exposure and technology exposure assessments among the faculty hints at skepticism or a cautious approach toward new AI technologies among more tech-experienced faculty. Faculty with extensive exposure to existing technology might adhere to more traditional teaching methods and tools. This "technological conservatism" can lead to a reluctance to explore or adopt newer AI technologies such as ChatGPT (Baer, 1998; Trivellas and Dargenidou, 2009; Waggoner, 1984), especially if they feel that their current teaching methods are effective. Faculty may also be concerned that AI tools could eventually replace certain aspects of their role, particularly in areas such as grading, feedback, and even some forms of instruction (D'Agostino, 2023; Prothero, 2023).

On the other hand, the positive correlation between ChatGPT exposure and perception among the faculty aligns with the students' trends, suggesting that increased familiarity with ChatGPT tends to enhance its perceived usefulness and acceptance. As faculty members gain more exposure to ChatGPT, they become more familiar with its functionalities and potential benefits. This increased familiarity can lead to a more positive perception as they discover practical applications for the tool in their teaching and research. With increased use, they may start to appreciate the unique benefits of ChatGPT, such as efficiency in handling administrative tasks, aiding in research, and providing new teaching methodologies. The correlations of ChatGPT exposure with ethical considerations and of ChatGPT perception with ethical considerations among faculty members suggest that increased interaction with AI technologies fosters a greater awareness and understanding of ethical issues. These findings align with the results observed among students, where a positive correlation was also found between ChatGPT exposure and ethical considerations: just as with faculty members, students who are more exposed to ChatGPT demonstrate a heightened awareness of ethical issues.

These findings strongly support the Technology Acceptance Model, which suggests that users' perceptions of technology's usefulness and ease of use influence their intention to adopt it. The positive correlation observed between ChatGPT exposure and perception among both students and faculty indicates that greater familiarity is associated with higher perceived usefulness, a key construct within TAM (Davis, 1989; Venkatesh and Davis, 2000). Moreover, the generally more favorable attitudes among students, who tend to have higher levels of exposure, further illustrate how experience can shape user attitudes and behavioral intentions. Taken together, these results underscore the relevance of TAM in understanding how generative AI tools like ChatGPT are being adopted in academic environments (Davis, 1989; Venkatesh and Davis, 2000).

The significant difference in technology exposure assessment between students and faculty, as revealed by the Mann-Whitney U-test, indicates that faculty members, on average, are slightly more exposed to technology than students. This could be due to faculty members' greater access to technological resources, training, and the necessity to integrate technology into their teaching and research activities. In contrast, the significant difference (p = 0.03) in ChatGPT perception between students and faculty suggests that students generally perceive ChatGPT more favorably than faculty members do. This might be due to students' greater enthusiasm and openness to exploring new technologies for academic assistance, while faculty members may hold more critical views on its application and implications (Fjeld et al., 2020; Binns et al., 2018).

The analysis of technology exposure assessment, ChatGPT perception, and demographic variables using the Kruskal-Wallis test showed a significant difference based on affiliation. Faculty/instructor members have more exposure to technology than students. This can be attributed to faculty members' greater access to technological resources, continuous training, and the necessity to integrate technology into their teaching and research activities. Faculty often have more opportunities and requirements to stay updated with the latest technological advancements to enhance their educational and research capabilities.

Gender was found to be a significant variable in ChatGPT exposure and ChatGPT perception. Male participants reported higher exposure to ChatGPT compared to other genders. First, gender disparities in information and communication technology (ICT) competencies have been noted even among younger people, indicating that boys generally gravitate toward computer and technology proficiency, while girls lean toward information-related skills (Corneliussen, 2012; Punter et al., 2017); it is possible that this disparity widens with age. Second, in job-related activities, the distinct motivations of men and women become evident. While women may be driven by a desire to help others, men's inclination toward money, power, and fame (Abele and Spurk, 2011; Eccles et al., 1999; Eccles and Wang, 2016) may increase their pursuit of newer technologies, such as ChatGPT. Third, in academia, male researchers tend to pursue riskier research agendas that are multidisciplinary and lead to groundbreaking discoveries (Santos et al., 2021), leading them to seek broader avenues, including ChatGPT, for assistance in conducting innovative research.

The analysis of ChatGPT exposure and demographic variables showed a significant difference based on college affiliation. The top three colleges with higher exposure to ChatGPT were business, engineering, and human sciences. In contrast, pharmacy, agriculture, and veterinary medicine reported the lowest exposure. This variation can be due to the differing emphasis and integration of AI technologies in the curricula and research priorities of these colleges (Floridi and Cowls, 2022; Jobin et al., 2019). Business, Engineering, and Human Sciences colleges may prioritize the integration of AI technologies to enhance transparency, accountability, technical robustness, and practical applications in their respective fields, leading to higher exposure levels to tools like ChatGPT. Pharmacy, Agriculture, and Veterinary Medicine disciplines may focus more on specialized, discipline-specific technologies and may not integrate general AI tools like ChatGPT as extensively into their curricula and research priorities, leading to lower exposure levels.

The present study's findings align with prior research conducted in various international contexts, demonstrating that exposure to ChatGPT and generative AI tools is positively associated with favorable perceptions and higher engagement. For example, university professors in European institutions have expressed concern about the potential ethical risks associated with ChatGPT, particularly its impact on the validity and fairness of assessment practices (Kiryakova and Angelova, 2023). These trends mirror our results at Auburn University, where faculty similarly reported apprehension about students' overreliance on AI tools and the potential erosion of critical thinking and independent learning.

While many studies have examined student or faculty attitudes separately, fewer have provided a direct comparison within the same institution. This study's design, evaluating both groups in the same educational and cultural context, offers a more controlled comparison and reveals interdependent attitudes that are critical for institutional AI integration strategies.

5 Implications and future research

The study findings have significant implications for the integration of AI technologies such as ChatGPT in academic settings. The varying perceptions and exposure levels across gender and affiliations suggest a need for tailored approaches to AI integration in academia. Future research should focus on understanding the underlying factors that contribute to these differences and exploring strategies to promote more inclusive and equitable AI adoption in academia.

Additionally, the study highlights the importance of exposure to and familiarity with AI tools in shaping perceptions. Educational institutions should thus consider proactive measures to introduce and integrate AI technologies such as ChatGPT into their curricula and professional development programs, ensuring that all members of the academic community have equitable access and opportunities to engage with such emerging technologies.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

HA: Writing – original draft, Writing – review & editing. DA: Writing – original draft, Writing – review & editing. YF: Data curation, Formal analysis, Methodology, Software, Visualization, Writing – original draft. SB: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Gen AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abele, A. E., and Spurk, D. (2011). The dual impact of gender and the influence of timing of parenthood on men's and women's career development: longitudinal findings. Int. J. Behav. Dev. 35, 225–232. doi: 10.1177/0165025411398181

Abukmeil, M., Ferrari, S., Genovese, A., Piuri, V., and Scotti, F. (2021). A survey of unsupervised generative models for exploratory data analysis and representation learning. ACM Comput. Surv. 54, 99:1–99:40. doi: 10.1145/3450963

Ahmad Alhadi, F., Yasseen, B. T., and Jabr, M. (1999). Water stress and gibberellic acid effects on growth of fenugreek plants. Irrig Sci. 18, 185–190. doi: 10.1007/s002710050061

Akiba, D., and Fraboni, M. C. (2023). AI-supported academic advising: exploring ChatGPT's current state and future potential toward student empowerment. Educ. Sci. 13:885. doi: 10.3390/educsci13090885

Ali, H., Fatemi, Y., Ali, D., Hamasha, M., and Hamasha, S. (2022). Investigating frontline nurse stress: perceptions of job demands, organizational support, and social support during the current COVID-19 pandemic. Front. Public Health 10:839600. doi: 10.3389/fpubh.2022.839600

Ali, H., Fatemi, Y., Hamasha, M., and Modi, S. (2023a). The cost of frontline nursing: investigating perception of compensation inadequacy during the COVID-19 pandemic. J. Multidiscip. Healthc. 16, 1311–1326. doi: 10.2147/JMDH.S402761

Ali, O., Murray, P. A., Momin, M., and Al-Anzi, F. S. (2023b). The knowledge and innovation challenges of ChatGPT: a scoping review. Technol. Soc. 75:102402. doi: 10.1016/j.techsoc.2023.102402

Alshater, M. M. (2022). Exploring the Role of Artificial Intelligence in Enhancing Academic Performance: A Case Study of ChatGPT. Rochester, NY. Available online at: https://papers.ssrn.com/abstract=4312358 (accessed January 9, 2024).

Ansari, A. N., Ahmad, S., and Bhutta, S. M. (2023). Mapping the global evidence around the use of ChatGPT in higher education: a systematic scoping review. Educ. Inf. Technol. 29, 1–41. doi: 10.1007/s10639-023-12223-4

Aydin, Ö., and Karaarslan, E. (2023). Is ChatGPT Leading Generative AI? What is Beyond Expectations? Rochester, NY. Available from: https://papers.ssrn.com/abstract=4341500 (accessed December 20, 2023).

Baer, W. S. (1998). Will the Internet Transform Higher Education? Santa Monica, CA: Rand.

Baidoo-Anu, D., and Owusu Ansah, L. (2023). Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning. Rochester, NY. Available online at: https://papers.ssrn.com/abstract=4337484 (accessed December 20, 2023).

Baradel, C. (2023). Interaction, Design, and Assessment: An Exploratory Study on ChatGPT in Language Education (2023). Available online at: http://dspace.unive.it/handle/10579/25013 (accessed December 25, 2023).

Belland, B. R., Walker, A. E., Olsen, M. W., and Leary, H. A. (2015). Pilot meta-analysis of computer-based scaffolding in STEM education. J. Educ. Technol. Soc. 18, 183–197.

Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., Shadbolt, N., et al. (2018). ““It's reducing a human being to a percentage”: perceptions of justice in algorithmic decisions,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY: Association for Computing Machinery), 1–14.

Biswas, S. S. (2023). Potential use of chat GPT in global warming. Ann. Biomed. Eng. 51, 1126–1127. doi: 10.1007/s10439-023-03171-8

Bom, H. S. H. (2023). Exploring the opportunities and challenges of ChatGPT in academic writing: a roundtable discussion. Nucl. Med. Mol. Imaging 57, 165–167. doi: 10.1007/s13139-023-00809-2

Brown, C. C., Adams, C. E., George, K. E., and Moore, J. E. (2020). Associations between comorbidities and severe maternal morbidity. Obstet. Gynecol. 136, 892–901. doi: 10.1097/AOG.0000000000004057

Carayon, P., Schoepke, J., Hoonakker, P. L. T., Haims, M. C., and Brunette, M. (2006). Evaluating causes and consequences of turnover intention among IT workers: the development of a questionnaire survey. Behav. Inf. Technol. 25, 381–397. doi: 10.1080/01449290500102144

Carmines, E. G., and Zeller, R. A. (1979). Reliability and Validity Assessment. Beverly Hills, CA: SAGE Publications. 73.

Chan, C. K. Y., and Tsi, L. H. Y. (2023). The AI Revolution in Education: Will AI Replace or Assist Teachers in Higher Education? Available online at: https://arxiv.org/abs/2305.01185v1 (accessed January 9, 2024).

Corneliussen, H. G. (2012). Gender-Technology Relations. London: Palgrave Macmillan UK. Available online at: http://link.springer.com/10.1057/9780230354623 (accessed January 9, 2024).

D'Agostino, S. (2023). Inside Higher Ed. Why Professors Are Polarized on AI. Available online at: https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/09/13/why-faculty-members-are-polarized-ai (accessed December 25, 2023).

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13, 319–340. doi: 10.2307/249008

De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., et al. (2023). ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front. Public Health 11:1166120. doi: 10.3389/fpubh.2023.1166120

Dehouche, N. (2021). Plagiarism in the age of massive generative pre-trained transformers (Gpt-3). Ethics Sci. Environ. Polit. 21, 17–23. doi: 10.3354/esep00195

Ding, H., Wu, J., Zhao, W., Matinlinna, J. P., Burrow, M. F., Tsoi, J. K. H., et al. (2023). Artificial intelligence in dentistry—A review. Front. Dent. Med. 4:1085251. doi: 10.3389/fdmed.2023.1085251

Eccles, J., Barber, B., and Jozefowicz, D. (1999). Linking Gender to Education, Occupation, and Recreational Choices: Applying the Eccles et al. Model of Achievement-Related Choices (Washington, DC: American Psychological Association), 153–192.

Eccles, J. S., and Wang, M. T. (2016). What motivates females and males to pursue careers in mathematics and science? Int. J. Behav. Dev. 40, 100–106. doi: 10.1177/0165025415616201

Firat, M. (2023). What ChatGPT means for universities: perceptions of scholars and students. J. Appl. Learn. Teach. 6, 57–63. doi: 10.37074/jalt.2023.6.1.22

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., and Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Rochester, NY. Available online at: https://papers.ssrn.com/abstract=3518482 (accessed June 18, 2024).

Floridi, L., and Cowls, J. (2022). "A unified framework of five principles for AI in society," in Machine Learning and the City (John Wiley & Sons, Ltd), 535–545. Available online at: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781119815075.ch45 (accessed June 18, 2024).

González-Arias, C., and López-García, X. (2023). ChatGPT: Stream of opinion in five newspapers in the first 100 days since its launch. Prof. Inf. 32:24. doi: 10.3145/epi.2023.sep.24

Groumpos, P. P. (2023). A critical historic overview of artificial intelligence: issues, challenges, opportunities, and threats. Artif. Intell. Appl. 1, 197–213. doi: 10.47852/bonviewAIA3202689

Haleem, A., Javaid, M., and Singh, R. P. (2022). An era of ChatGPT as a significant futuristic support tool: a study on features, abilities, and challenges. BenchCouncil Transact. Benchm. Stand. Eval. 2:100089. doi: 10.1016/j.tbench.2023.100089

Holzinger, A., Keiblinger, K., Holub, P., Zatloukal, K., and Müller, H. (2023). AI for life: trends in artificial intelligence for biotechnology. Nano Biotechnol. 74, 16–24. doi: 10.1016/j.nbt.2023.02.001

Hu, L. (2022). Generative AI and Future. Medium. Available online at: https://pub.towardsai.net/generative-ai-and-future-c3b1695876f2 (accessed December 20, 2023).

Ipek, Z. H., Gözüm, A. I. C., Papadakis, S., and Kallogiannakis, M. (2023). Educational applications of the ChatGPT AI system: a systematic review research. Educ. Process Int. J. 12, 26–55. doi: 10.22521/edupij.2023.123.2

Iqbal, N., Ahmed, H., and Azhar, K. A. (2022). Exploring teachers' attitudes towards using chatgpt. Glob. J. Manag. Administ. Sci. 3, 97–111. doi: 10.46568/gjmas.v3i4.163

Jobin, A., Ienca, M., and Vayena, E. (2019). The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399. doi: 10.1038/s42256-019-0088-2

Jovanovic, M., and Campbell, M. (2022). Generative artificial intelligence: trends and prospects. Computer 55, 107–112. doi: 10.1109/MC.2022.3192720

Jowarder, M. I. (2023). The influence of ChatGPT on social science students: insights drawn from undergraduate students in the United States. Indones. J. Innovat. Appl. Sci. 3, 194–200. doi: 10.47540/ijias.v3i2.878

Ju, S. (2023). ChatGPT in Academia and the Workplace. Career Center | University of Southern California. Available online at: https://careers.usc.edu/blog/2023/11/21/chatgpt-in-academia-and-the-workplace/ (accessed December 20, 2023).

Kalonde, G., and Mousa, R. (2016). Technology familiarization to preservice teachers: factors that influence teacher educators' technology decisions. J. Educ. Technol. Syst. 45, 236–255. doi: 10.1177/0047239515616965

Kaplan, A., and Haenlein, M. (2019). Siri, in my hand: who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 62, 15–25. doi: 10.1016/j.bushor.2018.08.004

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Ind. Diff. 103:102274. doi: 10.1016/j.lindif.2023.102274

Katar, O., Özkan, D., Yildirim, Ö., and Acharya, U. R. (2023). Evaluation of GPT-3 AI language model in research paper writing. TJST 18, 311–8. doi: 10.55525/tjst.1272369

Kim, S. K. A., and Wong, U. H. (2023). “ChatGPT impacts on academia,” in 2023 International Conference on System Science and Engineering (ICSSE), 422–426. Available online at: https://ieeexplore.ieee.org/abstract/document/10227188?casa_token=0NLJ9iYPNWQAAAAA:4KUdEpeI0h4tZWdz36nraUp7gYxMWhs5fmi0UGzChNw8W3MdwMBQ9Q_4VLr2jDkslpdV338Tgg (accessed February 26, 2024).

Kiryakova, G., and Angelova, N. (2023). ChatGPT—A challenging tool for the university professors in their teaching practice. Educ. Sci. 13:1056. doi: 10.3390/educsci13101056

Koo, T. K., and Li, M. Y. (2016). Guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropract. Med. 15, 155–163. doi: 10.1016/j.jcm.2016.02.012

Kruskal, W. H., and Wallis, W. A. (1952). Use of ranks in one-criterion variance analysis. J. Am. Stat. Assoc. 47, 583–621. doi: 10.1080/01621459.1952.10483441

Kumar, A. H. (2023). Analysis of ChatGPT tool to assess the potential of its utility for academic writing in biomedical domain. Biol. Eng. Med. Sci. Rep. 9, 24–30. doi: 10.5530/bems.9.1.5

Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 13:410. doi: 10.3390/educsci13040410

Mann, H. B., and Whitney, D. R. (1947). On a test of whether one of two random variables is stochastically larger than the other. Ann. Math. Stat. 18, 50–60. doi: 10.1214/aoms/1177730491

Marr, B. F. (2023). A Short History Of ChatGPT: How We Got To Where We Are Today. Available online at: https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/ (accessed December 20, 2023).

Mhlanga, D. (2023). Open AI in Education, the Responsible and Ethical Use of ChatGPT Towards Lifelong Learning. Rochester, NY. Available online at: https://papers.ssrn.com/abstract=4354422 (accessed January 9, 2024).

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., and Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big Data Soc. 3:2053951716679679. doi: 10.1177/2053951716679679

Mondal, H., and Mondal, S. (2023). ChatGPT in academic writing: maximizing its benefits and minimizing the risks. Indian J. Ophthalmol. 71:3600. doi: 10.4103/IJO.IJO_718_23

Murtagh, F., and Heck, A. (2012). Multivariate Data Analysis. Dordrecht: Springer Science & Business Media, 225.

Nam, B. H., and Bai, Q. (2023). ChatGPT and its ethical implications for STEM research and higher education: a media discourse analysis. Int. J. STEM Educ. 10:66. doi: 10.1186/s40594-023-00452-5

Needleman, E. (2023). Would Chat GPT Get a Wharton MBA? New White Paper By Christian Terwiesch. Mack Institute for Innovation Management. Available online at: https://mackinstitute.wharton.upenn.edu/2023/would-chat-gpt3-get-a-wharton-mba-new-white-paper-by-christian-terwiesch/ (accessed January 9, 2024).

O'Connor, S. (2023). Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ. Pract. 66:103537. doi: 10.1016/j.nepr.2022.103537

Oh, E., and Reeves, T. C. (2014). “Generational differences and the integration of technology in learning, instruction, and performance,” in Handbook of Research on Educational Communications and Technology, eds J. M. Spector, M. D. Merrill, J. Elen, and M. J. Bishop (New York, NY: Springer), 819–828. Available online at: https://doi.org/10.1007/978-1-4614-3185-5_66 (accessed December 25, 2023).

Ooi, K. B., Tan, G. W. H., Al-Emran, M., Al-Sharafi, M. A., Capatina, A., Chakraborty, A., et al. (2023). The potential of generative artificial intelligence across disciplines: perspectives and future directions. J. Comp. Inf. Syst. 0, 1–32. doi: 10.1080/08874417.2023.2261010

Pandey, S. (2023). The Evolution of Generative AI: A Journey from Eliza to Deep Learning. LinkedIn. Available online at: https://www.linkedin.com/pulse/evolution-generative-ai-journey-from-eliza-deep-learning-pandey/ (accessed December 20, 2023).

Pardos, Z. A., and Bhandari, S. (2023). Learning gain differences between ChatGPT and human tutor generated algebra hints. arXiv. doi: 10.48550/arXiv.2302.06871

Patel, S. B., and Lam, K. (2023). ChatGPT: the future of discharge summaries? Lancet Digital Health 5, e107–e108. doi: 10.1016/S2589-7500(23)00021-3

Pavlik, J. V. (2023). Collaborating with ChatGPT: considering the implications of generative artificial intelligence for journalism and media education. J. Mass Commun. Educ. 78, 84–93. doi: 10.1177/10776958221149577

Polit, D. F., and Beck, C. T. (2006). The content validity index: are you sure you know what's being reported? Critique and recommendations. Res. Nurs. Health 29, 489–497. doi: 10.1002/nur.20147

Prothero, A. (2023). Will Artificial Intelligence Help Teachers—or Replace Them? Education Week.

Punter, R. A., Meelissen, M. R., and Glas, C. A. (2017). Gender differences in computer and information literacy: an exploration of the performances of girls and boys in ICILS 2013. Eur. Educ. Res. J. 16, 762–780. doi: 10.1177/1474904116672468

Qadir, J. (2023). “Engineering education in the era of ChatGPT: promise and pitfalls of generative AI for education,” in 2023 IEEE Global Engineering Education Conference (EDUCON), 1–9. Available online at: https://ieeexplore.ieee.org/document/10125121 (accessed January 9, 2024).

Rane, N. (2023). ChatGPT and Similar Generative Artificial Intelligence (AI) for Smart Industry: Role, Challenges and Opportunities for Industry 4.0, Industry 5.0 and Society 5.0. Rochester, NY. Available online at: https://papers.ssrn.com/abstract=4603234 (accessed December 20, 2023).

Rasul, T., Nair, S., Kalendra, D., Robin, M., Santini, F. de O., Ladeira, W. J., et al. (2023). The role of ChatGPT in higher education: benefits, challenges, and future research directions. J. Appl. Learn. Teach. 6, 41–56. doi: 10.37074/jalt.2023.6.1.29

Ray, P. P. (2023). ChatGPT: a comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Int. Things Cyber Phys. Syst. 3, 121–154. doi: 10.1016/j.iotcps.2023.04.003

Rayner, M. (2023). ChatGPT Acts as Though It Has Strong Ethical Intuitions. Available online at: https://www.goodreads.com/book/show/78816542-chatgpt-acts-as-though-it-has-strong-ethical-intuitions-even-though-it (accessed January 9, 2024).

Roose, K. (2022). The Brilliance and Weirdness of ChatGPT. The New York Times. Available online at: https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html (accessed December 20, 2023).

Rudolph, J., Tan, S., and Tan, S. (2023). ChatGPT: bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 6, 342–363. doi: 10.37074/jalt.2023.6.1.9

Santos, J. M., Horta, H., and Amâncio, L. (2021). Research agendas of female and male academics: a new perspective on gender disparities in academia. Gend. Educ. 33, 625–643. doi: 10.1080/09540253.2020.1792844

Shoufan, A. (2023). Exploring students' perceptions of ChatGPT: thematic analysis and follow-up survey. IEEE Access 11, 38805–38818. doi: 10.1109/ACCESS.2023.3268224

Sok, S., and Heng, K. (2023). ChatGPT for Education and Research: A Review of Benefits and Risks. Rochester, NY. Available online at: https://papers.ssrn.com/abstract=4378735 (accessed January 9, 2024).

Sullivan, M., Kelly, A., and Mclaughlan, P. (2023). ChatGPT in higher education: considerations for academic integrity and student learning. J. Appl. Learn. Teach. 6, 1–10. doi: 10.37074/jalt.2023.6.1.17

Tate, T., Doroudi, S., Ritchie, D., Xu, Y., and Warschauer, M. (2023). Educational Research and AI-Generated Writing: Confronting the Coming Tsunami. Available online at: https://osf.io/4mec3 (accessed October 10, 2024).

Tatnall, A. (2012). History of Computer Hardware and Software Development. Computer Science and Engineering.

Tavakol, M., and Dennick, R. (2011). Making sense of Cronbach's alpha. Int. J. Med. Educ. 2, 53–55. doi: 10.5116/ijme.4dfb.8dfd

Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., et al. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 10:15. doi: 10.1186/s40561-023-00237-x

Trivellas, P., and Dargenidou, D. (2009). Leadership and service quality in higher education: the case of the Technological Educational Institute of Larissa. Int. J. Qual. Serv. Sci. 1, 294–310. doi: 10.1108/17566690911004221

Vaske, J. J., Beaman, J., and Sponarski, C. C. (2017). Rethinking internal consistency in Cronbach's alpha. Leis. Sci. 39, 163–173. doi: 10.1080/01490400.2015.1127189

Venkatesh, V., and Davis, F. D. (2000). A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage. Sci. 46, 186–204. doi: 10.1287/mnsc.46.2.186.11926

Waggoner, M. (1984). The new technologies versus the lecture tradition in higher education: is change possible? Educ. Technol. 24, 7–13.

Walsh, T. (2022). Everyone's Having a Field Day With ChatGPT – But Nobody Knows How It Actually Works. The Conversation. Available online at: http://theconversation.com/everyones-having-a-field-day-with-chatgpt-but-nobody-knows-how-it-actually-works-196378 (accessed December 20, 2023).

Wu, T., and Zhang, S. (2023). Applications and Implication of Generative AI in Non-STEM Disciplines in Higher Education. Singapore: Springer Nature Singapore.

Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Front. Psychol. 14:1181712. doi: 10.3389/fpsyg.2023.1181712

Zhang, B., Zhu, J., and Su, H. (2023). Toward the third generation artificial intelligence. Sci China Inf Sci. 66:121101. doi: 10.1007/s11432-021-3449-x

Keywords: ChatGPT, academia, technology, survey, ethics

Citation: Ali H, Ali D, Fatemi Y and Bharadwaj SS (2025) ChatGPT perceptions, experiences, and uses with emphasis on academia. Front. Educ. 10:1559509. doi: 10.3389/feduc.2025.1559509

Received: 12 January 2025; Accepted: 30 June 2025;
Published: 31 July 2025.

Edited by:

Huichun Liu, Guangzhou University, China

Reviewed by:

Sarah Elizabeth Rose, Staffordshire University, United Kingdom
Dennis Arias-Chávez, Continental University, Peru

Copyright © 2025 Ali, Ali, Fatemi and Bharadwaj. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Yasin Fatemi, yzf0024@auburn.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.