MINI REVIEW article

Front. Big Data, 23 August 2023

Sec. Machine Learning and Artificial Intelligence

Volume 6 - 2023 | https://doi.org/10.3389/fdata.2023.1224976

Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer

  • 1. Faculty of Law, Bond University, Gold Coast, QLD, Australia

  • 2. Institute for Human Resource Management, WU Vienna, Vienna, Austria


Abstract

ChatGPT, a new language model developed by OpenAI, has garnered significant attention in various fields since its release. This literature review provides an overview of early ChatGPT literature across multiple disciplines, exploring its applications, limitations, and ethical considerations. The review encompasses Scopus-indexed publications from November 2022 to April 2023 and includes 156 articles related to ChatGPT. The findings reveal a predominance of negative sentiment across disciplines, though subject-specific attitudes must be considered. The review highlights the implications of ChatGPT in many fields including healthcare, raising concerns about employment opportunities and ethical considerations. While ChatGPT holds promise for improved communication, further research is needed to address its capabilities and limitations. This literature review provides insights into early research on ChatGPT, informing future investigations and practical applications of chatbot technology, as well as development and usage of generative AI.

1. Introduction

ChatGPT, developed by OpenAI, is a new language model that has generated significant buzz within the technology industry and beyond. With the launch of the artificial intelligence-based Chat Generative Pre-trained Transformer (ChatGPT), OpenAI has taken the academic community by storm, forcing scientists, editors, and publishers of scientific journals to rethink and adjust their publication policies and strategies. Although access to ChatGPT has been restricted in some jurisdictions (e.g., China, Italy), its emergence, like the creation of the internet, may mark the beginning of a new era, and scholars need to engage with this technological development. Since its release, researchers have been exploring its capabilities and limitations across various fields such as healthcare, business, psychology, and computer science, building on research into earlier language models (Testoni et al., 2022; Rocca et al., 2023; Roy et al., 2023). This literature review aims to provide an overview of early ChatGPT literature across multiple disciplines, analyzing how the technology is being used and what implications this has for future research and practical applications.

The literature reviewed in this study includes a range of perspectives on ChatGPT, from its potential benefits and drawbacks to ethical considerations related to the technology (Seth et al., 2023). The findings suggest that while early research is still limited by the scope of available data, there are already some clear implications for future research and practical applications in various fields. For example, many scholars have raised concerns about ChatGPT's potential impact on employment opportunities across different industries (Qadir, 2022; Ai et al., 2023). While early studies suggest promising results for chatbot technology in healthcare settings, there are still significant ethical considerations (Rahimi and Talebi Bezmin Abadi, 2023) to be addressed before widespread implementation can occur.

ChatGPT uses advanced machine learning techniques to generate natural language responses, making it an attractive tool for various industries seeking more efficient communication with customers or clients. Its potential applications range from customer service chatbots to virtual assistants in healthcare settings (Sallam, 2023). However, as ChatGPT is still a relatively new technology, there are many questions about its capabilities and limitations that need to be addressed by researchers across different fields. This literature review aims to provide insights into how early research on ChatGPT has evolved, highlighting key findings from sentiment analysis of articles related to chatbot technology in various academic areas.

2. Methodology

While most of the discussion takes place in the media, in committee meetings, or in informal fora, systematic scholarly research has also begun to emerge rapidly. From the launch of ChatGPT in late November 2022 until April 2023 there were 156 publications, only two of which were released in 2022. While we appreciate all the research dedicated explicitly to new technologies, for quality assurance we limit our review to sources included in the rigorously monitored Scopus database, and in this paper we report only on a review of Scopus-indexed publications.

Sentiment analysis conducted in two popular software packages using different dictionaries showed a dominance of negative sentiment in all papers examined and across all disciplines. We nevertheless refrain from concluding that there is a general negative sentiment, since words expressing attitudes are subject-specific. We therefore selected a sub-sample of papers in the three disciplines (using the Scopus classification) in which the authors have both completed formal education and possess research experience: (1) economics, econometrics, and finance; (2) business, management, and accounting; and (3) social sciences. We read these papers paragraph by paragraph, assessing the sentiment of each paragraph as positive, neutral, or negative.
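The dictionary-based approach can be illustrated with a minimal sketch. The word lists below are invented stand-ins for illustration only; the actual software packages and dictionaries used in the analysis are not reproduced here.

```python
# A minimal dictionary-based sentiment scorer in the spirit of the
# dictionary-driven packages used for the analysis. The word lists
# below are illustrative stand-ins, not the dictionaries actually used.
NEGATIVE = {"concern", "risk", "bias", "plagiarism", "limitation", "threat"}
POSITIVE = {"benefit", "promising", "improve", "efficient", "opportunity"}

def score(text: str) -> str:
    """Label a paragraph positive, neutral, or negative by counting dictionary hits."""
    words = [w.strip(".,;:!?()\"'").lower() for w in text.split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"
```

Because different dictionaries can label the same paragraph differently, two packages with different dictionaries were used, and the automated scores were supplemented by paragraph-level manual coding in the sub-sample.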

When reviewing publications for this paper, we followed the usual procedures recommended for literature reviews in new and emerging fields of research (Gancarczyk et al., 2022; Liang et al., 2022). Having set the scope of the research to Scopus-indexed publications published between November 2022 and April 2023, we first identified papers containing the name “ChatGPT” in the title, abstract, or keywords. This resulted in 156 entries. Next, we sorted the resulting pool of papers into 22 subject areas. One hundred and forty publications fit into the pre-established categories, while the remaining 16 were classified as multidisciplinary. For details on the distribution and the actual publications, see Table 1.
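The screening and classification steps above can be sketched as follows. The field names ("title", "abstract", "keywords", "subject_area") and the area list are illustrative assumptions, not the actual Scopus export schema; the real review used 22 pre-established subject areas.

```python
# Sketch of the screening step (keep records naming ChatGPT in the
# title, abstract, or keywords) and the classification step (bucket
# retained records by subject area). Field names and the area list
# are hypothetical; the review used 22 pre-established areas.
from collections import Counter

PRESET_AREAS = {"Medicine", "Social Sciences", "Computer Science"}  # truncated for illustration

def mentions_chatgpt(record: dict) -> bool:
    """True if 'ChatGPT' appears in the title, abstract, or keywords."""
    fields = (
        record.get("title", ""),
        record.get("abstract", ""),
        " ".join(record.get("keywords", [])),
    )
    return any("chatgpt" in field.lower() for field in fields)

def classify(records: list) -> Counter:
    """Count retained records per subject area; unmatched areas go to Multidisciplinary."""
    counts = Counter()
    for record in records:
        if not mentions_chatgpt(record):
            continue
        area = record.get("subject_area")
        counts[area if area in PRESET_AREAS else "Multidisciplinary"] += 1
    return counts
```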

Table 1

Subject area | Number of articles | Citations
Medicine | 64 | Ahn, 2023; Alberts et al., 2023; Ali and Djalilian, 2023; Ali et al., 2023; Anderson et al., 2023; Ang et al., 2023; Berger and Schneider, 2023; Bernstein, 2023; Bhatia and Kulkarni, 2023; Bhattacharya et al., 2023; Borges, 2023; Boßelmann et al., 2023; Cascella et al., 2023; Cox, 2023; Curtis, 2023; Dahmen et al., 2023; D'Amico et al., 2023; DiGiorgio and Ehrenfeld, 2023; Donato et al., 2023; Doshi et al., 2023; Eardley, 2023; Elwood, 2023; Fijačko et al., 2023; Gordijn and Have, 2023; Hirosawa et al., 2023; Homolak, 2023; Huang et al., 2023; Huh, 2023b; Jungwirth and Haluza, 2023; Kahambing, 2023; Khan et al., 2023; Kim, 2023; Krettek, 2023; Lahat and Klang, 2023; Lecler et al., 2023; Lee, 2023b; Levin et al., 2023; Liebrenz et al., 2023; Looi, 2023; Macdonald et al., 2023; Maeker and Maeker-Poquet, 2023; Mann, 2023; Mogali, 2023; Moisset and Ciampi, 2023; Naumova, 2023; Nuryana and Pranolo, 2023; Ollivier et al., 2023; Park et al., 2023; Patel and Lam, 2023; Paul et al., 2023; Potapenko et al., 2023; Prada et al., 2023; Quintans-Júnior et al., 2023; Rozencwajg and Kantor, 2023; Sallam, 2023; Salvagno et al., 2023; Schorrlepp and Patzer, 2023; Šlapeta, 2023; Strunga et al., 2023; Temsah et al., 2023; The Lancet Digital Health, 2023; Yadava, 2023
Social Sciences | 56 | Abdel-Messih and Kamel Boulos, 2023; Adetayo, 2023; Arif et al., 2023; Castro Nascimento and Pimentel, 2023; Chen, 2023; Choi et al., 2023; Cooper, 2023; Costello, 2023; Cotton et al., 2023; Cox and Tzoc, 2023; Crawford et al., 2023; Dasborough, 2023; Dwivedi et al., 2023; Emenike and Emenike, 2023; Eysenbach, 2023; Fergus et al., 2023; Fernandez, 2023; Gašević et al., 2023; Gilson et al., 2023; Gordijn and Have, 2023; Gregorcic and Pendrill, 2023; Haman and Školník, 2023; Harder, 2023; Hu, 2023; Huh, 2023a,b,c; Humphry and Fuller, 2023; Iskender, 2023; Johinke et al., 2023; Karaali, 2023; Kasneci et al., 2023; Lee, 2023b; Lim et al., 2023; Lin et al., 2023; Lund and Wang, 2023; Lund et al., 2023; Masters, 2023a,b; Morreel et al., 2023; Nautiyal et al., 2023; O'Connor, 2023; Panda and Kaur, 2023; Pavlik, 2023; Perkins, 2023; Rospigliosi, 2023; Schijven and Kikkawa, 2023; Siegerink et al., 2023; Strunga et al., 2023; Subramani et al., 2023; Tang, 2023; Teixeira da Silva, 2023; Tlili et al., 2023; Tsigaris and Teixeira da Silva, 2023; Yeo-Teh and Tang, 2023
Computer Science | 25 | Adetayo, 2023; Aljanabi et al., 2023; Budler et al., 2023; Cascella et al., 2023; Castro Nascimento and Pimentel, 2023; DiGiorgio and Ehrenfeld, 2023; Du et al., 2023; Dwivedi et al., 2023; Fernandez, 2023; Gao et al., 2023; Gašević et al., 2023; Haluza and Jungwirth, 2023; Lin et al., 2023; Lund and Wang, 2023; Lund et al., 2023; Mijwil et al., 2023; Panda and Kaur, 2023; Rospigliosi, 2023; Schijven and Kikkawa, 2023; Taecharungroj, 2023; Teubner et al., 2023; Thurzo et al., 2023; Tlili et al., 2023; Wang et al., 2023; Zhou et al., 2023
Multidisciplinary | 16 | Graham, 2022, 2023a,b; Stokel-Walker, 2022, 2023; An et al., 2023; Else, 2023; Lahat et al., 2023; Owens, 2023; Seghier, 2023; Stokel-Walker and Van Noorden, 2023; Thorp, 2023; Tools such as ChatGPT threaten transparent science; here are our ground rules for their use, 2023; Tregoning, 2023; van Dis et al., 2023; Wang S. H., 2023
Health Professions | 14 | Ali et al., 2023; Anderson et al., 2023; Cascella et al., 2023; DiGiorgio and Ehrenfeld, 2023; Huh, 2023a,c; Lecler et al., 2023; Lee, 2023a; Liebrenz et al., 2023; Patel and Lam, 2023; Sallam, 2023; Strunga et al., 2023; The Lancet Digital Health, 2023; Thurzo et al., 2023
Nursing | 14 | Ahn, 2023; Choi et al., 2023; Doshi et al., 2023; Fijačko et al., 2023; Gunawan, 2023; Harder, 2023; O'Connor, 2023; Odom-Forren, 2023; Sallam, 2023; Scerri and Morin, 2023; Siegerink et al., 2023; Strunga et al., 2023; Teixeira da Silva, 2023; Thomas, 2023
Engineering | 11 | Biswas, 2023a,b; Cooper, 2023; Du et al., 2023; Gao et al., 2023; Haluza and Jungwirth, 2023; Huang et al., 2023; Lin et al., 2023; Tong and Zhang, 2023; Wang et al., 2023; Zhou et al., 2023
Decision Sciences | 10 | Ali et al., 2023; Anders, 2023; Chatterjee and Dethlefs, 2023; Dwivedi et al., 2023; Elali and Rachid, 2023; Jungwirth and Haluza, 2023; Liebrenz et al., 2023; Lund et al., 2023; Patel and Lam, 2023; The Lancet Digital Health, 2023
Business, Management and Accounting | 9 | Ameen et al., 2023; Dasborough, 2023; Dwivedi et al., 2023; Iskender, 2023; Lim et al., 2023; Nautiyal et al., 2023; Paul et al., 2023; Short and Short, 2023; Taecharungroj, 2023
Psychology | 8 | Berger and Schneider, 2023; Bhatia and Kulkarni, 2023; Dasborough, 2023; Kahambing, 2023; Kasneci et al., 2023; Nuryana and Pranolo, 2023; Paul et al., 2023; Thurzo et al., 2023
Biochemistry, Genetics and Molecular Biology | 7 | Borges, 2023; Cahan and Treutlein, 2023; Hallsworth et al., 2023; Holzinger et al., 2023; Subramani et al., 2023; Tong and Zhang, 2023; Will ChatGPT transform healthcare?, 2023
Chemistry | 6 | Castro Nascimento and Pimentel, 2023; Emenike and Emenike, 2023; Fergus et al., 2023; Humphry and Fuller, 2023; Rillig et al., 2023; Zhu et al., 2023
Environmental Science | 6 | Halloran et al., 2023; Hirosawa et al., 2023; Jungwirth and Haluza, 2023; Lin et al., 2023; Rillig et al., 2023; Zhu et al., 2023
Mathematics | 6 | Du et al., 2023; Gao et al., 2023; Haluza and Jungwirth, 2023; Harder, 2023; Karaali, 2023; Wang et al., 2023
Immunology and Microbiology | 5 | Hallsworth et al., 2023; Quintans-Júnior et al., 2023; Šlapeta, 2023; Temsah et al., 2023; Tong and Zhang, 2023
Chemical Engineering | 4 | Castro Nascimento and Pimentel, 2023; Hallsworth et al., 2023; Holzinger et al., 2023; Huang et al., 2023
Economics, Econometrics and Finance | 3 | Ai et al., 2023; Dowling and Lucey, 2023; Paul et al., 2023
Neuroscience | 3 | Boßelmann et al., 2023; Graf and Bernardi, 2023; Moisset and Ciampi, 2023
Physics and Astronomy | 3 | Gregorcic and Pendrill, 2023; Karimabadi et al., 2023; Wang J., 2023
Arts and Humanities | 2 | Costello, 2023; Floridi, 2023
Agricultural and Biological Sciences | 1 | Borges, 2023
Dentistry | 1 | Sardana et al., 2023
Energy | 1 | Lin et al., 2023

Early Scopus-indexed publications on ChatGPT (through 8 April 2023).

3. Limitations

Inevitably, given the time scope of our review, the research reviewed here is all based on the third version of ChatGPT and its various iterations. Version 4, released in mid-March 2023, offers considerable enhancements: for instance, it accepts image input and is capable of generating longer texts (Bhattacharya et al., 2023). Even though the fundamental assumptions and the basis on which ChatGPT works remain comparable, the greater variety of uses will have a more profound impact on the work of scholars, on what scientific institutions can achieve, and on the recipients of academic research. Consequently, we expect further research on ChatGPT to emerge quickly, and this review should serve only as a record of initial reactions in the scholarly literature.

4. Discussion

The early research on ChatGPT suggests a range of potential benefits and drawbacks across various fields such as healthcare, business, psychology, and computer science, among others. Like the beginnings of the internet or the creation of digital assets (Lawuobahsumo et al., 2022; Kapengut and Mizrach, 2023; Watters, 2023), ChatGPT and its underlying technology have the potential for both positive and negative disruption. While many scholars have raised concerns about the impact of ChatGPT on employment opportunities in different industries, there are also significant ethical considerations to be addressed before widespread implementation can occur.

The negative sentiment expressed in the literature toward ChatGPT is noteworthy, as it suggests that there are concerns or challenges associated with using this technology in various fields. While some studies have highlighted the potential benefits of ChatGPT, such as its ability to generate human-like responses and improve user experience, others have raised ethical and practical issues related to privacy, bias, transparency, and accountability. For instance, some researchers have argued that although OpenAI pays special attention to eliminating abusive vocabulary and hate speech by design, generative AI tools trained on text from the open Internet may still perpetuate or even amplify existing biases in language use and data representation, leading to discriminatory outcomes for certain groups of people (e.g., people who do not identify within a binary gender classification, or ethnic minorities). While particularly salient for language models, this issue overlaps with concerns surrounding social media and other sources of information (Thornhill et al., 2019; Kurpicz-Briki and Leoni, 2021), the impact on policy making (Lamba et al., 2021), and the risk of fake news (Wu and Liu, 2018; Shu et al., 2019). Others have pointed out the limitations of current models in handling the complex social interactions, emotional expressions, and cultural nuances that are essential for effective communication with humans.

It is therefore essential to ensure that chatbot technology is trained on diverse datasets that represent different demographics and cultures. Additionally, privacy concerns arise when personal information is collected by ChatGPT during conversations with users; clear guidelines for data collection and usage are needed to protect user privacy. Furthermore, transparency and accountability are essential so that users understand how their data are being used and who has access to them. As researchers continue to explore this new technology, it will be important to consider both the benefits and drawbacks of chatbot technology to fully understand its implications for future research and practical applications.

Early research on ChatGPT suggests that while there are clear implications for future research and practical applications in various fields, further studies need to be conducted to fully understand its capabilities and limitations. This includes addressing ethical considerations such as privacy concerns and bias in the datasets used by ChatGPT. Despite the potential benefits of chatbot technology, early research is still limited by the scope of available data. However, as ChatGPT continues to evolve and become more advanced, it has the potential to revolutionize communication across various industries. For instance, customer service chatbots can provide 24/7 support to customers, reducing wait times and improving overall satisfaction. While one might expect a positive reception of transformative technologies in the academic literature, the negative sentiment in the early literature may be explained by the types of articles published. Approximately 12% of articles included ethics as a keyword, and just over 8% included plagiarism. Not only is it logical that addressing ethical issues would produce articles with a negative sentiment, but these articles may also be published faster.

There is an increasing number of articles using LLMs and other AI-based solutions to benchmark hypothetical physical theories (Adesso, 2023), to process data, or for integration into medical practice. However, these studies usually take more time to conduct and, in the case of those involving humans or animals, have additional delays in receiving research ethics approval. Despite medicine being the largest category, the majority of articles were theoretical and discussed possible applications of ChatGPT. Necessarily, these types of articles address potential problems, whereas later scientific articles may focus more on solutions and therefore show a more positive sentiment. Not only is it logical that new technology would be treated with skepticism in the academic world, but it perhaps should not be surprising that early literature addresses the ethical concerns of researchers and postulates the problems that will need to be addressed in future research.

In healthcare settings, virtual assistants can help patients schedule appointments (Chow et al., 2023), answer medical questions, and even monitor vital signs. The use of AI in the medical context has also been a focus of the literature even outside the context of ChatGPT (Merhbene et al., 2022). However, the limited ability of current models to handle the complex social interactions, emotional expressions, and cultural nuances that are essential for effective communication with humans needs to be addressed before widespread implementation can occur. The literature suggests that the technology is not yet ready for clinical use, due to its limited ability and privacy issues (Au Yeung et al., 2023; De Angelis et al., 2023) and legal concerns (Dave et al., 2023). As researchers continue to explore this new technology, it will be important to consider both the benefits and drawbacks of chatbot technology in order to fully understand its implications for future research and practical applications.

There is a notable lack of legal scholarship addressing ChatGPT and large language models, which is surprising considering that legal considerations are addressed in many of the articles. This, however, may be explained by Scopus's limited coverage of law. Law articles tend to have low citation rates, as they cite the law itself more than other articles and may be local in nature (Eisenberg and Wells, 1998). Therefore, aside from law and society topics that receive higher citation rates, legal scholarship is largely ignored by the Scopus and Web of Science databases. Nevertheless, in addition to ethical issues such as plagiarism, legal issues including intellectual property rights are often discussed (D'Amico et al., 2023). Intellectual property is perhaps a greater issue with AI creating visual art than with most outputs of LLMs, especially if the LLM is trained on a sufficiently large dataset. Additionally, ensuring accuracy is arguably an even larger risk than plagiarism. D'Amico et al. (2023) state that “ChatGPT had been listed as the first author of four papers, without considering the possibility of ‘involuntary plagiarism' or intellectual property issues surrounding the output of the model.” The approach taken by these papers (ChatGPT Generative Pre-trained Transformer and Zhavoronkov, 2022; O'Connor, 2022; King and ChatGPT, 2023) is understandable considering it is unknown what standards will be adopted in the future. However, using the output is not all that different from drawing on one's own knowledge. Academics must cite all sources of information, not only for ethical reasons but also because citation strengthens the claims of a paper. Not only does failure to cite a source of information constitute plagiarism, but it also weakens the paper. Over time, however, people learn, and they may make statements without remembering the original source. Thus, as something becomes common knowledge, the source becomes less likely to be cited.

When using LLMs, if something is outside an author's knowledge, they will need to look it up and, in so doing, will be ethically compelled to cite the source confirming that knowledge. We therefore argue that the primary danger is that authors will publish material produced by an LLM without ensuring its accuracy. It is not unethical to use an LLM, but authors must ensure the veracity of the final work regardless of whether they use an LLM like ChatGPT, built-in spelling and grammar checkers, other text-editing software such as MS Word, or other AI solutions to assist with writing. More research, therefore, should focus on the risk of fake resources, including journals publishing articles falsely purporting to be from famous academics, a problem that will undoubtedly increase with the proliferation of LLM technology.

One surprising factor was the geographic universality of the findings. As can be seen in Figure 1, the top 25 countries by authorship included all six inhabited continents. In fact, the top three countries, the United States, United Kingdom, and India, are each on a different continent. While there is a stronger representation of English-speaking countries, mainland China ranks fourth in the number of authors. It is perhaps not surprising that China, Japan, and South Korea, all leaders in technological development, would be amongst the top 15 regions despite not being English-speaking countries.

Figure 1

Authorship by country.

Overall, the early literature on ChatGPT suggests that while it has great potential for improving communication across various industries, there are still many questions to be answered before its full impact can be realized. As researchers continue to explore this new technology, it will be important to consider both the benefits and drawbacks of chatbot technology to fully understand its implications for future research and practical applications. For instance, while ChatGPT has the potential to improve customer service by providing quick responses to frequently asked questions, there is a risk that customers may become frustrated if they encounter complex issues that cannot be resolved through automation. Additionally, chatbot technology may not be suitable for all industries or contexts, and it will be important to identify which applications are most effective in different settings. As ChatGPT continues to evolve and become more advanced, researchers must remain vigilant about the ethical considerations associated with its use, including privacy concerns (Masters, 2023a), bias in the datasets used by chatbots (Thornhill et al., 2019), transparency, accountability, and cultural sensitivity. By addressing these issues head-on, we can ensure that ChatGPT and similar solutions are deployed responsibly and effectively. The fact that all disciplines show negative sentiment toward ChatGPT in the early literature implies that scholars are embracing this cautious approach.

5. Conclusion

In conclusion, the early literature on ChatGPT suggests that while there are promising results for its potential applications in various fields, there are also significant ethical considerations to be addressed before widespread implementation can occur. The negative sentiment across all academic areas in the early ChatGPT literature may be explained by limitations in current research or ethical concerns related to the use of GPT technology. As ChatGPT is still a relatively new technology, there are many questions about its capabilities and limitations that need to be addressed by researchers across different fields. The geographical dispersion of authors' institutions, and their standing in university rankings, signal that the interest is global in scope and a matter of importance for institutions of all kinds. In addition, the lack of comprehensive studies or datasets that can provide more nuanced insights into its capabilities and limitations beyond simple language processing tasks may contribute to the negative sentiment across different disciplines. Overall, while early research is still limited by the scope of available data, there are already some clear implications for future research and practical applications in various fields. As ChatGPT technology continues to evolve, it will be important for researchers and stakeholders to work together to address these ethical considerations and ensure that this powerful tool is used responsibly and effectively across different industries.

Statements

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

  • 1

    Abdel-MessihM. S.Kamel BoulosM. N. (2023). ChatGPT in clinical toxicology. JMIR Med. Educ. 9. 10.22541/au.167715747.75006360/v1

  • 2

    AdessoG. (2023). Towards the ultimate brain: exploring scientific discovery with Chatgpt Ai. Authorea. preprints. 10.22541/au.167701309.98216987/v1

  • 3

    AdetayoA. J. (2023). Artificial intelligence chatbots in academic libraries: the rise of ChatGPT. Library Hi Tech News40, 1821. 10.1108/LHTN-01-2023-0007

  • 4

    AhnC. (2023). Exploring ChatGPT for information of cardiopulmonary resuscitation. Resuscitation185, 109729. 10.1016/j.resuscitation.2023.109729

  • 5

    AiH.ZhouZ.YanY. (2023). The impact of Smart city construction on labour spatial allocation: evidence from China. Appl. Econ. 10.1080/00036846.2023.2186367

  • 6

    AlbertsI. L.MercolliL.PykaT.PrenosilG.ShiK.RomingerA.et al. (2023). Large language models (LLM) and ChatGPT: what will the impact on nuclear medicine be?Eur. J. Nucl. Med. Mol. Imaging50, 15491552. 10.1007/s00259-023-06172-w

  • 7

    AliM. J.DjalilianA. (2023). Readership awareness series–paper 4: Chatbots and ChatGPT - ethical considerations in scientific publications. Semin. Ophthalmol. 38, 403404. 10.1080/08820538.2023.2193444

  • 8

    AliS. R.DobbsT. D.HutchingsH. A.WhitakerI. S. (2023). Using ChatGPT to write patient clinic letters. Lancet Digit. Health5, e179e181. 10.1016/S2589-7500(23)00048-1

  • 9

    AljanabiM.GhaziM.AliA. H.AbedS. A.ChatGpt. (2023). ChatGpt: open possibilities. Iraqi J. Comput. Sci. Math.4, 6264. 10.52866/20ijcsm.2023.01.01.0018

  • 10

    AmeenN.VigliaG.AltinayL. (2023). Revolutionizing services with cutting-edge technologies post major exogenous shocks. Serv. Ind. J.43, 125133. 10.1080/02642069.2023.2185934

  • 11

    AnJ.DingW.LinC. (2023). ChatGPT: tackle the growing carbon footprint of generative AI. Nature615, 586. 10.1038/d41586-023-00843-2

  • 12

    AndersB. A. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking?Patterns4, 100694. 10.1016/j.patter.2023.100694

  • 13

    AndersonN.BelavyD. L.PerleS. M.HendricksS.HespanholL.VerhagenE.et al. (2023). AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in sports and exercise medicine manuscript generation. BMJ Open Sport Exerc. Med.9, e001568. 10.1136/bmjsem-2023-001568

  • 14

    AngT. L.ChoolaniM.SeeK. C.PohK. K. (2023). The rise of artificial intelligence: addressing the impact of large language models such as ChatGPT on scientific publications. Singapore Med. J.64, 219221. 10.4103/singaporemedj.SMJ-2023-055

  • 15

    ArifT. B.MunafU.Ul-HaqueI. (2023). The future of medical education and research: is ChatGPT a blessing or blight in disguise?Med. Educ. Online28, 2181052. 10.1080/10872981.2023.2181052

  • 16

    Au YeungJ.KraljevicZ.LuintelA.BalstonA.IdowuE.DobsonR. J.et al. (2023). AI chatbots not yet ready for clinical use. Front. Digit. Health5, 1161098. 10.3389/fdgth.2023.1161098

  • 17

    BergerU.SchneiderN. (2023). How ChatGPT will change research, education and healthcare?PPmP Psychother. Psychosom. Med. Psychol.73, 159161. 10.1055/a-2017-8471

  • 18

    BernsteinJ. (2023). Not the Last Word: ChatGPT can't perform orthopaedic surgery. Clin. Orthop. Relat. Res.481, 651655. 10.1097/CORR.0000000000002619

  • 19

    BhatiaG.KulkarniA. (2023). ChatGPT as co-author: are researchers impressed or distressed?Asian J. Psychiatr.84, 103564. 10.1016/j.ajp.2023.103564

  • 20

    BhattacharyaK.BhattacharyaA. S.BhattacharyaN.YagnikV. D.GargP.KumarS.et al. (2023). ChatGPT in surgical practice—a new kid on the block. Indian J. Surg. 10.1007/s12262-023-03727-x

  • 21

    BiswasS. S. (2023a). Potential use of Chat GPT in global warming. Ann. Biomed. Eng.51, 11261112. 10.1007/s10439-023-03171-8

  • 22

    BiswasS. S. (2023b). Role of Chat GPT in public health. Ann. Biomed. Eng.51, 868869. 10.1007/s10439-023-03172-7

  • 23

    BorgesR. M. (2023). A braver new world? Of chatbots and other cognoscenti. J. Biosci.48, 10. 10.1007/s12038-023-00334-6

  • 24

    BoßelmannC. M.LeuC.LalD. (2023). Are AI language models such as ChatGPT ready to improve the care of individuals with epilepsy?Epilepsia64, 11951199. 10.1111/epi.17570

  • 25

    BudlerL. C.GosakL.StiglicG. (2023). Review of artificial intelligence-based question-answering systems in healthcare. Wiley Interdiscip. Rev. Data Min. Knowl. Discov.13, e1487. 10.1002/widm.1487

  • 26

    CahanP.TreutleinB. (2023). A conversation with ChatGPT on the role of computational systems biology in stem cell research. Stem Cell Rep.18, 12. 10.1016/j.stemcr.2022.12.009

  • 27

    CascellaM.MontomoliJ.BelliniV.BignamiE. (2023). Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios. J. Med. Syst.47, 33. 10.1007/s10916-023-01925-4

  • 28

    Castro NascimentoC. M.PimentelA. S. (2023). Do large language models understand chemistry? A conversation with ChatGPT. J. Chem. Inf. Model. 63, 16491655. 10.1021/acs.jcim.3c00285

  • 29

    ChatGPT Generative Pre-trained Transformer and Zhavoronkov A.. (2022). Rapamycin in the context of Pascal's Wager: generative pre-trained transformer perspective. Oncoscience9, 82. 10.18632/oncoscience.571

  • 30

    ChatterjeeJ.DethlefsN. (2023). This new conversational AI model can be your friend, philosopher, and guide… even your worst enemy. Patterns4, 100676. 10.1016/j.patter.2022.100676

  • 31

    Chen, X. (2023). ChatGPT and its possible impact on library reference services. Internet Ref. Serv. Q. 27, 121–129. 10.1080/10875301.2023.2181262

  • 32

    Choi, E. P. H., Lee, J. J., Ho, M. H., Kwok, J. Y. Y., and Lok, K. Y. W. (2023). Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Educ. Today 125, 105796. 10.1016/j.nedt.2023.105796

  • 33

    Chow, J. C. L., Sanders, L., and Li, K. (2023). Impact of ChatGPT on medical chatbots as a disruptive technology. Front. Artif. Intell. 6, 1166014. 10.3389/frai.2023.1166014

  • 34

    Cooper, G. (2023). Examining science education in ChatGPT: an exploratory study of generative artificial intelligence. J. Sci. Educ. Technol. 32, 444–452. 10.1007/s10956-023-10039-y

  • 35

    Costello, E. (2023). ChatGPT and the educational AI chatter: full of bullshit or trying to tell us something? Postdigital Sci. Educ. 2, 863–878. 10.1007/s42438-023-00398-5

  • 36

    Cotton, D. R. E., Cotton, P. A., and Shipway, J. R. (2023). Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov. Educ. Teach. Int. 10.1080/14703297.2023.2190148

  • 37

    Cox Jr., L. A. (2023). Causal reasoning about epidemiological associations in conversational AI. Glob. Epidemiol. 5, 16. 10.1016/j.gloepi.2023.100102

  • 38

    Cox, C., and Tzoc, E. (2023). ChatGPT implications for academic libraries. Coll. Res. Libr. News 84, 99–102. 10.5860/crln.84.3.99

  • 39

    Crawford, J., Cowling, M., and Allen, K. A. (2023). Leadership is needed for ethical ChatGPT: character, assessment, and learning using artificial intelligence (AI). J. Univ. Teach. Learn. Pract. 20. 10.53761/1.20.3.02

  • 40

    Curtis, N. (2023). To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing. Pediatr. Infect. Dis. J. 42, 275. 10.1097/INF.0000000000003852

  • 41

    Dahmen, J., Kayaalp, M. E., Ollivier, M., Pareek, A., Hirschmann, M. T., Karlsson, J., et al. (2023). Artificial intelligence bot ChatGPT in medical research: the potential game changer as a double-edged sword. Knee Surg. Sports Traumatol. Arthrosc. 31, 1187–1189. 10.1007/s00167-023-07355-6

  • 42

    D'Amico, R. S., White, T. G., Shah, H. A., and Langer, D. J. (2023). I asked a ChatGPT to write an editorial about how we can incorporate chatbots into neurosurgical research and patient care…. Neurosurgery 92, 663–664. 10.1227/neu.0000000000002414

  • 43

    Dasborough, M. T. (2023). Awe-inspiring advancements in AI: the impact of ChatGPT on the field of organizational behavior. J. Organ. Behav. 44, 177–179. 10.1002/job.2695

  • 44

    Dave, T., Athaluri, S. A., and Singh, S. (2023). ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front. Artif. Intell. 6, 1169595. 10.3389/frai.2023.1169595

  • 45

    De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., et al. (2023). ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front. Public Health 11, 1166120. 10.3389/fpubh.2023.1166120

  • 46

    DiGiorgio, A. M., and Ehrenfeld, J. M. (2023). Artificial intelligence in medicine and ChatGPT: de-tether the physician. J. Med. Syst. 47, 32. 10.1007/s10916-023-01926-3

  • 47

    Donato, H., Escada, P., and Villanueva, T. (2023). The transparency of science with ChatGPT and the emerging artificial intelligence language models: where should medical journals stand? Acta Med. Port. 36, 147–148. 10.20344/amp.19694

  • 48

    Doshi, R. H., Bajaj, S. S., and Krumholz, H. M. (2023). ChatGPT: temptations of progress. Am. J. Bioethics 23, 6–8. 10.1080/15265161.2023.2180110

  • 49

    Dowling, M., and Lucey, B. (2023). ChatGPT for (Finance) research: the bananarama conjecture. Finance Res. Lett. 53, 103662. 10.1016/j.frl.2023.103662

  • 50

    Du, H., Teng, S., Chen, H., Ma, J., Wang, X., Gou, C., et al. (2023). Chat with ChatGPT on intelligent vehicles: an IEEE TIV perspective. IEEE Trans. Intell. Veh. 8, 2020–2026. 10.1109/TIV.2023.3253281

  • 51

    Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., et al. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int. J. Inf. Manage. 71, 102642. 10.1016/j.ijinfomgt.2023.102642

  • 52

    Eardley, I. (2023). ChatGPT: what does it mean for scientific research and publishing? BJU Int. 131, 381–382. 10.1111/bju.15995

  • 53

    Eisenberg, T., and Wells, M. T. (1998). Ranking and explaining the scholarly impact of law schools. J. Legal Stud. 27, 373–413. 10.1086/468024

  • 54

    Elali, F. R., and Rachid, L. N. (2023). AI-generated research paper fabrication and plagiarism in the scientific community. Patterns 4, 100706. 10.1016/j.patter.2023.100706

  • 55

    Else, H. (2023). Abstracts written by ChatGPT fool scientists. Nature 613, 423. 10.1038/d41586-023-00056-7

  • 56

    Elwood, T. W. (2023). Technological impacts on the sphere of professional journals. J. Allied Health 52, 1. Available online at: https://pubmed.ncbi.nlm.nih.gov/36892853/

  • 57

    Emenike, M. E., and Emenike, B. U. (2023). Was this title generated by ChatGPT? Considerations for artificial intelligence text-generation software programs for chemists and chemistry educators. J. Chem. Educ. 100, 1413–1418. 10.1021/acs.jchemed.3c00063

  • 58

    Eysenbach, G. (2023). The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med. Educ. 9, e46885. 10.2196/46885

  • 59

    Fergus, S., Botha, M., and Ostovar, M. (2023). Evaluating academic answers generated using ChatGPT. J. Chem. Educ. 100, 1672–1675. 10.1021/acs.jchemed.3c00087

  • 60

    Fernandez, P. (2023). “Through the looking glass: envisioning new library technologies” AI-text generators as explained by ChatGPT. Library Hi Tech News 40, 11–14. 10.1108/LHTN-02-2023-0017

  • 61

    Fijačko, N., Gosak, L., Štiglic, G., Picard, C. T., and John Douma, M. (2023). Can ChatGPT pass the life support exams without entering the American Heart Association course? Resuscitation 185, 109732. 10.1016/j.resuscitation.2023.109732

  • 62

    Floridi, L. (2023). AI as agency without intelligence: on ChatGPT, large language models, and other generative models. Philos. Technol. 36, 15. 10.1007/s13347-023-00621-y

  • 63

    Gancarczyk, M., Łasak, P., and Gancarczyk, J. (2022). The fintech transformation of banking: governance dynamics and socio-economic outcomes in spatial contexts. Entrep. Bus. Econ. Rev. 10, 143–165. 10.15678/EBER.2022.100309

  • 64

    Gao, Y., Tong, W., Wu, E. Q., Chen, W., Zhu, G., Wang, F., et al. (2023). Chat with ChatGPT on interactive engines for intelligent driving. IEEE Trans. Intell. Veh. 8, 2034–2036. 10.1109/TIV.2023.3252571

  • 65

    Gašević, D., Siemens, G., and Sadiq, S. (2023). Empowering learners for the age of artificial intelligence. Comput. Educ. Artif. Intell. 4, 100130. 10.1016/j.caeai.2023.100130

  • 66

    Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., et al. (2023). How does ChatGPT perform on the United States Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment. JMIR Med. Educ. 9, e45312. 10.2196/45312

  • 67

    Gordijn, B., and Have, H. (2023). ChatGPT: evolution or revolution? Med. Health Care Philos. 26, 1–2. 10.1007/s11019-023-10136-0

  • 68

    Graf, A., and Bernardi, R. E. (2023). ChatGPT in research: balancing ethics, transparency and advancement. Neuroscience 515, 71–73. 10.1016/j.neuroscience.2023.02.008

  • 69

    Graham, F. (2022). Daily briefing: will ChatGPT kill the essay assignment? Nature. 10.1038/d41586-022-04437-2

  • 70

    Graham, F. (2023a). Daily briefing: science urgently needs a plan for ChatGPT. Nature. 10.1038/d41586-023-00360-2

  • 71

    Graham, F. (2023b). Daily briefing: the science underlying the Turkey–Syria earthquake. Nature. 10.1038/d41586-023-00373-x

  • 72

    Gregorcic, B., and Pendrill, A. M. (2023). ChatGPT and the frustrated Socrates. Phys. Educ. 58, 035021. 10.1088/1361-6552/acc299

  • 73

    Gunawan, J. (2023). Exploring the future of nursing: insights from the ChatGPT model. Belitung Nurs. J. 9, 1–5. 10.33546/bnj.2551

  • 74

    Halloran, L. J. S., Mhanna, S., and Brunner, P. (2023). AI tools such as ChatGPT will disrupt hydrology, too. Hydrol. Process. 37. 10.1002/hyp.14843

  • 75

    Hallsworth, J. E., Udaondo, Z., Pedrós-Alió, C., Höfer, J., Benison, K. C., Lloyd, K. G., et al. (2023). Scientific novelty beyond the experiment. Microb. Biotechnol. 16, 1131–1173. 10.1111/1751-7915.14222

  • 76

    Haluza, D., and Jungwirth, D. (2023). Artificial intelligence and ten societal megatrends: an exploratory study using GPT-3. Systems 11, 120. 10.3390/systems11030120

  • 77

    Haman, M., and Školník, M. (2023). Using ChatGPT to conduct a literature review. Account. Res. 1–3. 10.1080/08989621.2023.2185514

  • 78

    Harder, N. (2023). Using ChatGPT in simulation design: what can (or should) it do for you? Clin. Simul. Nurs. 78, A1–A2. 10.1016/j.ecns.2023.02.011

  • 79

    Hirosawa, T., Harada, Y., Yokose, M., Sakamoto, T., Kawamura, R., Shimizu, T., et al. (2023). Diagnostic accuracy of differential-diagnosis lists generated by generative pretrained transformer 3 chatbot for clinical vignettes with common chief complaints: a pilot study. Int. J. Environ. Res. Public Health 20, 3378. 10.3390/ijerph20043378

  • 80

    Holzinger, A., Keiblinger, K., Holub, P., Zatloukal, K., and Müller, H. (2023). AI for life: trends in artificial intelligence for biotechnology. N. Biotechnol. 74, 16–24. 10.1016/j.nbt.2023.02.001

  • 81

    Homolak, J. (2023). Opportunities and risks of ChatGPT in medicine, science, and academic publishing: a modern Promethean dilemma. Croat. Med. J. 64, 1–3. 10.3325/cmj.2023.64.1

  • 82

    Hu, G. (2023). Challenges for enforcing editorial policies on AI-generated papers. Account. Res. 1–3. 10.1080/08989621.2023.2184262

  • 83

    Huang, J., Yeung, A. M., Kerr, D., and Klonoff, D. C. (2023). Using ChatGPT to predict the future of diabetes technology. J. Diabet. Sci. Technol. 17, 853–854. 10.1177/19322968231161095

  • 84

    Huh, S. (2023a). Are ChatGPT's knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination? A descriptive study. J. Educ. Eval. Health Prof. 20, 1. 10.3352/jeehp.2023.20.01

  • 85

    Huh, S. (2023b). Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic. Sci. Edit. 10, 1–4. 10.6087/kcse.290

  • 86

    Huh, S. (2023c). Issues in the 3rd year of the COVID-19 pandemic, including computer-based testing, study design, ChatGPT, journal metrics, and appreciation to reviewers. J. Educ. Eval. Health Prof. 20, 5. 10.3352/jeehp.2023.20.5

  • 87

    Humphry, T., and Fuller, A. L. (2023). Potential ChatGPT use in undergraduate chemistry laboratories. J. Chem. Educ. 100, 1434–1436. 10.1021/acs.jchemed.3c00006

  • 88

    Iskender, A. (2023). Holy or unholy? Interview with Open AI's ChatGPT. Eur. J. Tour. Res. 34, 3414. 10.54055/ejtr.v34i.3169

  • 89

    Johinke, R., Cummings, R., and Di Lauro, F. (2023). Reclaiming the technology of higher education for teaching digital writing in a post-pandemic world. J. Univ. Teach. Learn. Pract. 20. 10.53761/1.20.02.01

  • 90

    Jungwirth, D., and Haluza, D. (2023). Artificial intelligence and public health: an exploratory study. Int. J. Environ. Res. Public Health 20, 4541. 10.3390/ijerph20054541

  • 91

    Kahambing, J. G. (2023). ChatGPT, ‘polypsychic' artificial intelligence, and psychiatry in museums. Asian J. Psychiatr. 83, 103548. 10.1016/j.ajp.2023.103548

  • 92

    Kapengut, E., and Mizrach, B. (2023). An event study of the Ethereum transition to proof-of-stake. Commodities 2, 96–110. 10.3390/commodities2020006

  • 93

    Karaali, G. (2023). Artificial intelligence, basic skills, and quantitative literacy. Numeracy 16, 9. 10.5038/1936-4660.16.1.1438

  • 94

    Karimabadi, H., Wilkes, J., and Roberts, D. A. (2023). The need for adoption of neural HPC (NeuHPC) in space sciences. Front. Astron. Space Sci. 10, 1120389. 10.3389/fspas.2023.1120389

  • 95

    Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103, 102274. 10.1016/j.lindif.2023.102274

  • 96

    Khan, R. A., Jawaid, M., Khan, A. R., and Sajjad, M. (2023). ChatGPT: reshaping medical education and clinical management. Pak. J. Med. Sci. 39, 605–607. 10.12669/pjms.39.2.7653

  • 97

    Kim, S. G. (2023). Using ChatGPT for language editing in scientific articles. Maxillofac. Plast. Reconstr. Surg. 45, 13. 10.1186/s40902-023-00381-x

  • 98

    King, M. R., and ChatGPT (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cell. Mol. Bioeng. 16, 1–2. 10.1007/s12195-022-00754-8

  • 99

    Krettek, C. (2023). ChatGPT: milestone text AI with game changing potential. Unfallchirurgie 126, 252–254. 10.1007/s00113-023-01296-y

  • 100

    Kurpicz-Briki, M., and Leoni, T. (2021). A world full of stereotypes? Further investigation on origin and gender bias in multi-lingual word embeddings. Front. Big Data 4, 625290. 10.3389/fdata.2021.625290

  • 101

    Lahat, A., and Klang, E. (2023). Can advanced technologies help address the global increase in demand for specialized medical care and improve telehealth services? J. Telemed. Telecare. 10.1177/1357633X231155520

  • 102

    Lahat, A., Shachar, E., Avidan, B., Shatz, Z., Glicksberg, B. S., Klang, E., et al. (2023). Evaluating the use of large language model in identifying top research questions in gastroenterology. Sci. Rep. 13, 4164. 10.1038/s41598-023-31412-2

  • 103

    Lamba, H., Rodolfa, K. T., and Ghani, R. (2021). An empirical comparison of bias reduction methods on real-world problems in high-stakes policy settings. SIGKDD Explor. Newsl. 23, 69–85. 10.1145/3468507.3468518

  • 104

    Lawuobahsumo, K. K., Algieri, B., Iania, L., and Leccadito, A. (2022). Exploring dependence relationships between bitcoin and commodity returns: an assessment using the Gerber cross-correlation. Commodities 1, 34–49. 10.3390/commodities1010004

  • 105

    Lecler, A., Duron, L., and Soyer, P. (2023). Revolutionizing radiology with GPT-based models: current applications, future possibilities and limitations of ChatGPT. Diagn. Interv. Imaging 104, 269–274. 10.1016/j.diii.2023.02.003

  • 106

    Lee, J. Y. (2023a). Can an artificial intelligence chatbot be the author of a scholarly article? J. Educ. Eval. Health Prof. 20, 6. 10.3352/jeehp.2023.20.6

  • 107

    Lee, J. Y. (2023b). Can an artificial intelligence chatbot be the author of a scholarly article? Sci. Edit. 10, 7–12. 10.6087/kcse.292

  • 108

    Levin, G., Meyer, R., Kadoch, E., and Brezinov, Y. (2023). Identifying ChatGPT-written OBGYN abstracts using a simple tool. Am. J. Obstet. Gynecol. MFM 5, 100936. 10.1016/j.ajogmf.2023.100936

  • 109

    Liang, Y., Watters, C., and Lemański, M. K. (2022). Responsible management in the hotel industry: an integrative review and future research directions. Sustainability 14, 17050. 10.3390/su142417050

  • 110

    Liebrenz, M., Schleifer, R., Buadze, A., Bhugra, D., and Smith, A. (2023). Generating scholarly content with ChatGPT: ethical challenges for medical publishing. Lancet Digit. Health 5, e105–e106. 10.1016/S2589-7500(23)00019-5

  • 111

    Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., and Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. Int. J. Manag. Educ. 21, 100790. 10.1016/j.ijme.2023.100790

  • 112

    Lin, C. C., Huang, A. Y. Q., and Yang, S. J. H. (2023). A review of AI-driven conversational chatbots implementation methodologies and challenges (1999–2022). Sustainability 15, 4012. 10.3390/su15054012

  • 113

    Looi, M. K. (2023). Sixty seconds on… ChatGPT. BMJ 380, 205. 10.1136/bmj.p205

  • 114

    Lund, B. D., and Wang, T. (2023). Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Library Hi Tech News. 10.2139/ssrn.4333415

  • 115

    Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., Wang, Z., et al. (2023). ChatGPT and a new academic reality: artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. J. Assoc. Inf. Sci. Technol. 74, 570–581. 10.1002/asi.24750

  • 116

    Macdonald, C., Adeloye, D., Sheikh, A., and Rudan, I. (2023). Can ChatGPT draft a research article? An example of population-level vaccine effectiveness analysis. J. Glob. Health 13, 01003. 10.7189/jogh.13.01003

  • 117

    Maeker, E., and Maeker-Poquet, B. (2023). ChatGPT: a solution for producing medical literature reviews? NPG Neurol. Psychiat. Geriatr. 23, 137–143. 10.1016/j.npg.2023.03.002

  • 118

    Mann, D. L. (2023). Artificial intelligence discusses the role of artificial intelligence in translational medicine: a JACC: Basic to Translational Science interview with ChatGPT. JACC Basic Transl. Sci. 8, 221–223. 10.1016/j.jacbts.2023.01.001

  • 119

    Masters, K. (2023a). Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med. Teach. 45, 574–584. 10.1080/0142159X.2023.2186203

  • 120

    Masters, K. (2023b). Response to: Aye, AI! ChatGPT passes multiple-choice family medicine exam. Med. Teach. 45, 666. 10.1080/0142159X.2023.2190476

  • 121

    Merhbene, G., Nath, S., Puttick, A. R., and Kurpicz-Briki, M. (2022). BURNOUT ensemble: augmented intelligence to detect indications for burnout in clinical psychology. Front. Big Data 5, 863100. 10.3389/fdata.2022.863100

  • 122

    Mijwil, M. M., Aljanabi, M., and ChatGPT (2023). Towards artificial intelligence-based cybersecurity: the practices and ChatGPT generated ways to combat cybercrime. Iraqi J. Comput. Sci. Math. 4, 65–70. 10.52866/ijcsm.2023.01.01.0019

  • 123

    Mogali, S. R. (2023). Initial impressions of ChatGPT for anatomy education. Anat. Sci. Educ. 10.1002/ase.2261

  • 124

    Moisset, X., and Ciampi de Andrade, D. (2023). Neuro-ChatGPT? Potential threats and certain opportunities. Rev. Neurol. 179, 517–519. 10.1016/j.neurol.2023.02.066

  • 125

    Morreel, S., Mathysen, D., and Verhoeven, V. (2023). Aye, AI! ChatGPT passes multiple-choice family medicine exam. Med. Teach. 45, 665–666. 10.1080/0142159X.2023.2187684

  • 126

    Naumova, E. N. (2023). A mistake-find exercise: a teacher's tool to engage with information innovations, ChatGPT, and their analogs. J. Public Health Policy 44, 173–178. 10.1057/s41271-023-00400-1

  • 127

    Nautiyal, R., Albrecht, J. N., and Nautiyal, A. (2023). ChatGPT and tourism academia. Ann. Tour. Res. 99, 103544. 10.1016/j.annals.2023.103544

  • 128

    Nuryana, Z., and Pranolo, A. (2023). ChatGPT: the balance of future, honesty, and integrity. Asian J. Psychiatr. 84, 103571. 10.1016/j.ajp.2023.103571

  • 129

    O'Connor, S. (2022). Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ. Pract. 66, 103537. 10.1016/j.nepr.2022.103537

  • 130

    O'Connor, S. (2023). Corrigendum to “Open artificial intelligence platforms in nursing education: tools for academic progress or abuse?” [Nurse Educ. Pract. 66 (2022), 103537]. Nurse Educ. Pract. 67, 103572. 10.1016/j.nepr.2023.103572

  • 131

    Odom-Forren, J. (2023). The role of ChatGPT in perianesthesia nursing. J. Perianesth. Nurs. 38, 176–177. 10.1016/j.jopan.2023.02.006

  • 132

    Ollivier, M., Pareek, A., Dahmen, J., Kayaalp, M. E., Winkler, P. W., Hirschmann, M. T., et al. (2023). A deeper dive into ChatGPT: history, use and future perspectives for orthopaedic research. Knee Surg. Sports Traumatol. Arthrosc. 31, 1190–1192. 10.1007/s00167-023-07372-5

  • 133

    Owens, B. (2023). How Nature readers are using ChatGPT. Nature 615, 20. 10.1038/d41586-023-00500-8

  • 134

    Panda, S., and Kaur, N. (2023). Exploring the viability of ChatGPT as an alternative to traditional chatbot systems in library and information centers. Library Hi Tech News 40, 22–55. 10.1108/LHTN-02-2023-0032

  • 135

    Park, I., Joshi, A. S., and Javan, R. (2023). Potential role of ChatGPT in clinical otolaryngology explained by ChatGPT. Am. J. Otolaryngol. Head Neck Med. Surg. 44, 103873. 10.1016/j.amjoto.2023.103873

  • 136

    Patel, S. B., and Lam, K. (2023). ChatGPT: the future of discharge summaries? Lancet Digit. Health 5, e107–e108. 10.1016/S2589-7500(23)00021-3

  • 137

    Paul, J., Ueno, A., and Dennis, C. (2023). ChatGPT and consumers: benefits, pitfalls and future research agenda. Int. J. Consum. Stud. 47, 1213–1225. 10.1111/ijcs.12928

  • 138

    Pavlik, J. V. (2023). Collaborating with ChatGPT: considering the implications of generative artificial intelligence for journalism and media education. Journal. Mass Commun. Educ. 78, 84–93. 10.1177/10776958221149577

  • 139

    Perkins, M. (2023). Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond. J. Univ. Teach. Learn. Pract. 20. 10.53761/1.20.02.07

  • 140

    Potapenko, I., Boberg-Ans, L. C., Stormly Hansen, M., Klefter, O. N., van Dijk, E. H. C., and Subhi, Y. (2023). Artificial intelligence-based chatbot patient information on common retinal diseases using ChatGPT. Acta Ophthalmol. 10.1111/aos.15661

  • 141

    Prada, P., Perroud, N., and Thorens, G. (2023). Artificial intelligence and psychiatry: questions from psychiatrists to ChatGPT. Rev. Med. Suisse 19, 532–536. 10.53738/REVMED.2023.19.818.532

  • 142

    Qadir, J. (2022). Engineering Education in the Era of ChatGPT: Promise and Pitfalls of Generative AI for Education. Available online at: https://www.researchgate.net/publication/366712815_Engineering_Education_in_the_Era_of_ChatGPT_Promise_and_Pitfalls_of_Generative_AI_for_Education

  • 143

    Quintans-Júnior, L. J., Gurgel, R. Q., Araújo, A. A. S., Correia, D., and Martins-Filho, P. R. (2023). ChatGPT: the new panacea of the academic world. Rev. Soc. Bras. Med. Trop. 56, e0060. 10.1590/0037-8682-0060-2023

  • 144

    Rahimi, F., and Talebi Bezmin Abadi, A. (2023). ChatGPT and publication ethics. Arch. Med. Res. 54, 272–274. 10.1016/j.arcmed.2023.03.004

  • 145

    Rillig, M. C., Ågerstrand, M., Bi, M., Gould, K. A., and Sauerland, U. (2023). Risks and benefits of large language models for the environment. Environ. Sci. Technol. 57, 3464–3466. 10.1021/acs.est.3c01106

  • 146

    Rocca, R., Tamagnone, N., Fekih, S., Contla, X., and Rekabsaz, N. (2023). Natural language processing for humanitarian action: opportunities, challenges, and the path toward humanitarian NLP. Front. Big Data 6, 1082787. 10.3389/fdata.2023.1082787

  • 147

    Rospigliosi, P. A. (2023). Artificial intelligence in teaching and learning: what questions should we ask of ChatGPT? Interact. Learn. Environ. 31, 1–3. 10.1080/10494820.2023.2180191

  • 148

    Roy, K., Gaur, M., Soltani, M., Rawte, V., Kalyan, A., Sheth, A., et al. (2023). ProKnow: process knowledge for safety constrained and explainable question generation for mental health diagnostic assistance. Front. Big Data 5, 1056728. 10.3389/fdata.2022.1056728

  • 149

    Rozencwajg, S., and Kantor, E. (2023). Elevating scientific writing with ChatGPT: a guide for reviewers, editors… and authors. Anaesth. Crit. Care Pain Med. 42, 101209. 10.1016/j.accpm.2023.101209

  • 150

    Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare 11, 887. 10.3390/healthcare11060887

  • 151

    Salvagno, M., Taccone, F. S., Gerli, A. G., and ChatGPT (2023). Can artificial intelligence help for scientific writing? Crit. Care 27, 75. 10.1186/s13054-023-04380-2

  • 152

    Sardana, D., Fagan, T. R., and Wright, J. T. (2023). ChatGPT: a disruptive innovation or disrupting innovation in academia? J. Am. Dent. Assoc. 154, 361–364. 10.1016/j.adaj.2023.02.008

  • 153

    Scerri, A., and Morin, K. H. (2023). Using chatbots like ChatGPT to support nursing practice. J. Clin. Nurs. 32, 4211–4213. 10.1111/jocn.16677

  • 154

    Schijven, M. P., and Kikkawa, T. (2023). Why is serious gaming important? Let's have a chat! Simul. Gaming 54, 147–149. 10.1177/10468781231152682

  • 155

    Schorrlepp, M., and Patzer, K. H. (2023). ChatGPT in der Hausarztpraxis: die künstliche Intelligenz im Check [ChatGPT in general practice: putting artificial intelligence to the test]. MMW Fortschr. Med. 165, 12–16. 10.1007/s15006-023-2473-3

  • 156

    Seghier, M. L. (2023). ChatGPT: not all languages are equal. Nature 615, 216. 10.1038/d41586-023-00680-3

  • 157

    Seth, I., Bulloch, G., and Angus Lee, C. H. (2023). Redefining academic integrity, authorship, and innovation: the impact of ChatGPT on surgical research. Ann. Surg. Oncol. 30, 5284–5285. 10.1245/s10434-023-13642-w

  • 158

    Short, C. E., and Short, J. C. (2023). The artificially intelligent entrepreneur: ChatGPT, prompt engineering, and entrepreneurial rhetoric creation. J. Bus. Ventur. Insights 19, e00388. 10.1016/j.jbvi.2023.e00388

  • 159

    Shu, K., Bernard, H. R., and Liu, H. (2019). “Studying fake news via network analysis: detection and mitigation,” in Emerging Research Challenges and Opportunities in Computational Social Network Analysis and Mining, eds N. Agarwal, N. Dokoohaki, and S. Tokdemir (Cham: Springer International Publishing), 43–65. 10.1007/978-3-319-94105-9_3

  • 160

    Siegerink, B., Pet, L. A., Rosendaal, F. R., and Schoones, J. W. (2023). ChatGPT as an author of academic papers is wrong and highlights the concepts of accountability and contributorship. Nurse Educ. Pract. 68, 103599. 10.1016/j.nepr.2023.103599

  • 161

    Šlapeta, J. (2023). Are ChatGPT and other pretrained language models good parasitologists? Trends Parasitol. 39, 314–316. 10.1016/j.pt.2023.02.006

  • 162

    Stokel-Walker, C. (2022). AI bot ChatGPT writes smart essays — should academics worry? Nature. 10.1038/d41586-022-04397-7

  • 163

    Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: many scientists disapprove. Nature 613, 620–621. 10.1038/d41586-023-00107-z

  • 164

    Stokel-Walker, C., and Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature 614, 214–216. 10.1038/d41586-023-00340-6

  • 165

    Strunga, M., Urban, R., Surovková, J., and Thurzo, A. (2023). Artificial intelligence systems assisting in the assessment of the course and retention of orthodontic treatment. Healthcare 11, 683. 10.3390/healthcare11050683

  • 166

    Subramani, M., Jaleel, I., and Krishna Mohan, S. (2023). Evaluating the performance of ChatGPT in medical physiology university examination of phase I MBBS. Adv. Physiol. Educ. 47, 270–271. 10.1152/advan.00036.2023

  • 167

    Taecharungroj, V. (2023). “What can ChatGPT do?” Analyzing early reactions to the innovative AI chatbot on Twitter. Big Data Cogn. Comput. 7, 35. 10.3390/bdcc7010035

  • 168

    Tang, G. (2023). Letter to editor: academic journals should clarify the proportion of NLP-generated content in papers. Account. Res. 1–2. 10.1080/08989621.2023.2180359

  • 169

    Teixeira da Silva, J. A. (2023). Is ChatGPT a valid author? Nurse Educ. Pract. 68, 103600. 10.1016/j.nepr.2023.103600

  • 170

    Temsah, M. H., Jamal, A., and Al-Tawfiq, J. A. (2023). Reflection with ChatGPT about the excess death after the COVID-19 pandemic. New Microbes New Infect. 52, 101103. 10.1016/j.nmni.2023.101103

  • 171

    Testoni, A., Greco, C., and Bernardi, R. (2022). Artificial intelligence models do not ground negation, humans do. GuessWhat?! Dialogues as a case study. Front. Big Data 4, 736709. 10.3389/fdata.2021.736709

  • 172

    Teubner, T., Flath, C. M., Weinhardt, C., van der Aalst, W., and Hinz, O. (2023). Welcome to the Era of ChatGPT et al.: the prospects of large language models. Bus. Inf. Syst. Eng. 65, 95–101. 10.1007/s12599-023-00795-x

  • 173

    The Lancet Digital Health (2023). ChatGPT: friend or foe? Lancet Digit. Health 5, e102. 10.1016/S2589-7500(23)00023-7

  • 174

    Thomas, S. P. (2023). Grappling with the implications of ChatGPT for researchers, clinicians, and educators. Issues Ment. Health Nurs. 44, 141–142. 10.1080/01612840.2023.2180982

  • 175

    Thornhill, C., Meeus, Q., Peperkamp, J., and Berendt, B. (2019). A digital nudge to counter confirmation bias. Front. Big Data 2, 11. 10.3389/fdata.2019.00011

  • 176

    Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science 379, 313. 10.1126/science.adg7879

  • 177

    Thurzo, A., Strunga, M., Urban, R., Surovková, J., and Afrashtehfar, K. I. (2023). Impact of artificial intelligence on dental education: a review and guide for curriculum update. Educ. Sci. 13, 150. 10.3390/educsci13020150

  • 178

    Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., et al. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 10, 15. 10.1186/s40561-023-00237-x

  • 179

    Tong, Y., and Zhang, L. (2023). Discovering the next decade's synthetic biology research trends with ChatGPT. Synth. Syst. Biotechnol. 8, 220–223. 10.1016/j.synbio.2023.02.004

  • 180

    Tools such as ChatGPT threaten transparent science; here are our ground rules for their use (2023). Nature 613, 612. 10.1038/d41586-023-00191-1

  • 181

    Tregoning, J. (2023). AI writing tools could hand scientists the ‘gift of time'. Nature. 10.1038/d41586-023-00528-w

  • 182

    Tsigaris, P., and Teixeira da Silva, J. A. (2023). Can ChatGPT be trusted to provide reliable estimates? Account. Res. 1–3. 10.1080/08989621.2023.2179919

  • 183

    van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., and Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature 614, 224–226. 10.1038/d41586-023-00288-7

  • 184

    Wang, F. Y., Miao, Q., Li, X., Wang, X., and Lin, Y. (2023). What does ChatGPT say: the DAO from algorithmic intelligence to linguistic intelligence. IEEE/CAA J. Autom. Sin. 10, 575–579. 10.1109/JAS.2023.123486

  • 185

    Wang, J. (2023). ChatGPT: a test drive. Am. J. Phys. 91, 255–256. Available online at: https://pubs.aip.org/aapt/ajp/article/91/4/255/2878655/ChatGPT-A-test-drive

  • 186

    Wang, S. H. (2023). OpenAI — explain why some countries are excluded from ChatGPT. Nature 615, 34. 10.1038/d41586-023-00553-9

  • 187

    Watters, C. (2023). When criminals abuse the blockchain: establishing personal jurisdiction in a decentralised environment. Laws 12, 33. 10.3390/laws12020033

  • 188

    Will ChatGPT transform healthcare? (2023). Nat. Med. 29, 505–506. 10.1038/s41591-023-02289-5

  • 189

    Wu, L., and Liu, H. (2018). “Tracing fake-news footprints: characterizing social media messages by how they propagate,” in Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining (Marina del Rey, CA). 10.1145/3159652.3159677

  • 190

    Yadava, O. P. (2023). ChatGPT—a foe or an ally? Indian J. Thorac. Cardiovasc. Surg. 39, 217–221. 10.1007/s12055-023-01507-6

  • 191

    Yeo-Teh, N. S. L., and Tang, B. L. (2023). Letter to editor: NLP systems such as ChatGPT cannot be listed as an author because these cannot fulfill widely adopted authorship criteria. Account. Res. 1–3. 10.1080/08989621.2023.2177160

  • 192

    Zhou, J., Ke, P., Qiu, X., Huang, M., and Zhang, J. (2023). ChatGPT: potential, prospects, and limitations. Front. Inf. Technol. Electron. Eng. 10.1631/FITEE.2300089

  • 193

    Zhu, J. J., Jiang, J., Yang, M., and Ren, Z. J. (2023). ChatGPT and environmental research. Environ. Sci. Technol. 10.1021/acs.est.3c01818

Keywords

ChatGPT, large language model (LLM), transformer, GPT, disruptive technology, artificial intelligence, AI

Citation

Watters C and Lemanski MK (2023) Universal skepticism of ChatGPT: a review of early literature on chat generative pre-trained transformer. Front. Big Data 6:1224976. doi: 10.3389/fdata.2023.1224976

Received

18 May 2023

Accepted

10 July 2023

Published

23 August 2023

Volume

6 - 2023

Edited by

José Valente De Oliveira, University of Algarve, Portugal

Reviewed by

Ziya Levent Gokaslan, Brown University, United States; Gerardo Adesso, University of Nottingham, United Kingdom

Copyright

*Correspondence: Casey Watters

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
