GENERAL COMMENTARY article

Front. Digit. Health, 06 April 2023
Sec. Personalized Medicine

Commentary: The desire of medical students to integrate artificial intelligence into medical education: An opinion article

  • 1Science of Learning in Education Centre (SoLEC), Office of Education Research, National Institute of Education, Nanyang Technological University, Singapore, Singapore
  • 2Center for Advanced Brain Imaging (CABI), Georgia Institute of Technology, Atlanta, GA, United States
  • 3Centre for Research and Development in Learning (CRADLE), Nanyang Technological University, Singapore, Singapore

A Commentary on

The desire of medical students to integrate artificial intelligence into medical education: An opinion article

By Frommeyer TC, Fursmidt RM, Gilbert MM and Bett ES. (2022) Front. Digit. Health 4:831123. doi: 10.3389/fdgth.2022.831123

Introduction

Artificial intelligence (AI) is widely regarded as the suite of technologies that allow computers to perform tasks resembling human abilities (e.g., speech and pattern recognition, rendering advice, generating new content) (1, 2). It has proven its merits in the healthcare industry over the past decades, and there are promising prospects that it will continue to do so in the years to come (3, 4). The recent opinion article by Frommeyer et al. (5) highlighted the value of AI for the medical profession with respect to precision medicine, drug discovery, disease diagnosis, and healthcare management, before making a "call to action" for the introduction of "guided seminars and courses on biostatistics, digital health literacy, and engineering technologies" (5, p. 3). In this commentary, we respond to this call for AI-oriented medical education by offering some internationally applicable ideas that would turn the proposal of Frommeyer et al. (5) into reality.

AI knowledge that matters

Computational and programming procedures

Frommeyer et al. (5), and more recently Ravindran (4), gave convincing arguments for the practical benefits of using AI tools, which operate on the basis of machine learning (ML), in the provision of certain healthcare services. These advantages relate to the enormous savings in time and human resources when conducting (i) radiological imaging (e.g., optimizing the screening of potentially cancerous tissues during oncological examination), (ii) the discovery of new drugs using large datasets containing information about the chemical and molecular constituents of pre-existing drugs, and (iii) the creation of therapeutic and treatment regimens that are personalized according to the patient's needs [for details, see (4–6)]. Even though these AI tools have generated much attention from healthcare professionals and researchers owing to their efficiency in delivering tangible outcomes, we believe it would be more valuable for the focus to turn to giving future medical and healthcare professionals a technical and holistic understanding of the AI-based computational and programming processes directed at the provision of specific healthcare services. Accordingly, we do not consider it sufficient for medical schools to instruct their students only in the basic, non-hands-on facts of AI methods (i.e., telling medical students what AI is and what it can do without informing them about the computational processes involved). In this sense, we do not want future medical practitioners and researchers to become mere consumers or "regurgitators" of AI-related content knowledge. Instead, we want such professionals to learn and think critically about how AI works, so that they can use it properly in their careers with an informed mindset. For instance, if students were taught how AI facilitates the discovery of new drugs for treating cancer, they should understand where the data come from, their qualitative and quantitative features, the general category of the AI algorithm (e.g., supervised, unsupervised, or reinforcement learning) and the specific type of AI method to be used [e.g., decision trees, k-nearest neighbors (KNN), convolutional neural networks (CNN)], how to partition the data and cross-validate the results using training and testing datasets, and, perhaps most importantly, how to analyze and interpret the results with respect to the research questions at hand (see the illustrative sketch below).

The successful final interpretation of computational findings requires the domain-specific knowledge of a medical/healthcare professional as well as the technical expertise of a computer scientist or engineer skilled in AI/ML matters. As such, an interdisciplinary and collaborative effort is crucial: the medical expert knows the general pattern of findings that needs to be attained to answer the research questions, while the computing/engineering expert knows the functions of the algorithms deployed, how the statistical outputs were generated, and what they mean. With this in mind, we recommend an educational partnership in which medical doctors and computer scientists/engineers serve together as instructors in the medical-themed AI/ML program proposed herein. Ideally, the two parties should also communicate openly and pool their expertise when developing new AI/ML algorithms or tools for tackling specific medical research problems.
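To make the workflow just described more concrete, the following is a minimal, illustrative sketch of a supervised ML pipeline in Python using scikit-learn. The synthetic dataset, the choice of a KNN classifier, and all parameter values are hypothetical stand-ins for real clinical data and models, not a prescription for any particular study.

```python
# Illustrative supervised-learning workflow: synthetic data, train/test
# partitioning, cross-validation, and a held-out evaluation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# 1. Obtain the data: 500 synthetic "patients" with 20 numeric features and a
#    binary outcome (a hypothetical stand-in for real clinical data).
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# 2. Partition into training and held-out testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# 3. Choose a specific supervised method (k-nearest neighbors in this example).
model = KNeighborsClassifier(n_neighbors=5)

# 4. Cross-validate on the training set to gauge generalization before testing.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"5-fold cross-validation accuracy: {cv_scores.mean():.2f}")

# 5. Fit on the full training set and evaluate once on the held-out test set.
model.fit(X_train, y_train)
print(f"Held-out test accuracy: {model.score(X_test, y_test):.2f}")
```

The value of such an exercise lies not in the code itself but in the exposure it gives students to data partitioning, cross-validation, and the need to interpret the resulting metrics against the clinical research question.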

Taking a broader perspective, medical students would be best placed to acquire the technical know-how described above if they had some background programming knowledge and experience with coding AI/ML functions. However, because programming/coding skills are the mainstay of computer scientists and engineers, and not characteristic of the average medical student or healthcare professional, we advocate the use of ML software packages with intuitive graphical user interfaces that operate through drop-down menus and point-and-click functions. KNIME is currently a prime example.1 It is an open-source software package with a studio-like user interface that allows users to build, visualize, and deploy ML models by constructing node-and-edge flowcharts. KNIME can be learned easily through online video courses (for instance, many video tutorials are freely available on YouTube), and we recommend that medical instructors consider KNIME a serious candidate for their hands-on ML tutorials and workshops.

Caveats against the “Omnipotence” of AI

In addition to learning about the nuances of AI/ML operations, we would like to stress that these technical courses should be complemented by a firm conceptual understanding of what AI can and cannot do in providing healthcare services. For example, Kompa et al. (7) highlighted the importance of quantifying and communicating the probabilistic concept of uncertainty, or stochasticity, when using ML techniques for medical diagnostics and predictions. We see this approach as important because it enables healthcare professionals to become more perceptive when using AI and reduces the risk of their making overconfident or unrealistic interpretations of findings or outputs derived from AI/ML tools (a simple illustration of surfacing predictive uncertainty is sketched below). We believe that the delivery of such knowledge to the medical/healthcare community is crucial because we do not want our proposed AI medical curriculum to be a mere copy of the courses offered by a typical computer science or engineering department, that is, one that focuses primarily on programming and statistics without due consideration of the limitations of AI technology in healthcare and the human factors involved. Specifically, we do not want attendees of such an AI medical program to enter and leave with a "one-track mind," thinking that AI is the "be-all and end-all" of human technological progress. This caveat is especially important given the presence of some overly optimistic present-day opinions favoring the use of AI tools in medical practice (4), biases that can potentially lead the general public to flawed conclusions, for instance, that AI will eventually replace human radiologists and pathologists simply because of its higher accuracy in medical image analysis (4).

We caution against the espousal of such views because other AI experts have noted that AI algorithms work best for medical imaging tasks that are specific and narrowly defined [e.g., identifying chest nodules and brain hemorrhages (3)]. This relates to the fact that many present-day AI/ML models are limited by an inability to extrapolate beyond the types of data used to validate them in the first place (8): a trained AI model works best on data whose features are congruent with or similar to those of the original training dataset, and it cannot be applied flawlessly to make predictions or inferences on datasets whose features differ markedly from those of the training data. For instance, while AI can outperform human pathologists in the accurate detection of bone fractures and internal bleeding, it performs much worse than humans in interpreting brain images from patients with acute neurological disorders (9). Evidence of the latter sort shows that although AI methods can serve as helpful tools, they are far from replacing human critical thinking and inferential skills anytime soon. We will still need medical doctors to interact with patients, to inform them about the nature of their illnesses in a congenial fashion, and to perform medical image-guided interventions (3). These are the "ingredients" of the "human touch" that AI cannot replace.
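As a simple illustration of how predictive uncertainty can be surfaced to clinicians rather than hidden behind a hard label, the sketch below flags low-confidence predictions for human review. It is a generic example with synthetic data, an arbitrary (hypothetical) confidence threshold, and a logistic regression model; it is not the specific method proposed by Kompa et al. (7).

```python
# Surfacing predictive uncertainty: report class probabilities and flag
# low-confidence cases for clinician review instead of acting on bare labels.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Per-class probabilities, and the confidence of the top class for each case.
probabilities = clf.predict_proba(X_test)
confidence = probabilities.max(axis=1)

# Hypothetical cut-off: cases below this confidence are referred to a human.
REVIEW_THRESHOLD = 0.8
needs_human_review = confidence < REVIEW_THRESHOLD
print(f"{needs_human_review.mean():.0%} of test cases flagged for clinician review")
```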

“Black box” nature of conventional AI/ML models

Following this line of thought, we also deem it important to inform medical students about the so-called "black box" nature of most AI/ML algorithms/models today, which relates to their inability to show human users how they computed their decisions and generated their outputs (10). Currently, there is a movement to overcome the impediment posed by non-explainable, black box-like AI/ML models through the development of advanced explainable AI (XAI) techniques [e.g., LIME and SHAP; see (10) for descriptions and formulas], which are set to offer enormous benefits to industries such as aviation and healthcare by making the decision-making processes of AI/ML algorithms more transparent to the user (a minimal sketch of one such technique follows below). This development, however, does not negate the fact that many of the most popular AI/ML models are "black boxes" by initial design (e.g., KNN, random forests, neural networks, support vector machines), and that the validity of their outputs is highly contingent on the quality of the input data (10). Notably, the input data can be affected by sampling or selection biases, which create samples that are unrepresentative of the general population and culminate in findings that are not entirely dependable or ecologically valid. In other words, because "black box" models do not make the links between input and output data explicit to the user, the user must exert extra care in ensuring the quality and representativeness of the input data so that the output findings carry merit for generalization and real-world application. We therefore hope to see the inclusion of these vital facts concerning data selection/sampling and quality assurance in the medical curriculum proposed herein.
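As a concrete, deliberately simplified illustration of the XAI techniques mentioned above, the sketch below applies SHAP to a tree-based classifier trained on synthetic data. The model, the data, and the use of TreeExplainer are assumptions chosen for brevity rather than a clinical recipe, and the third-party shap package is assumed to be installed.

```python
# Probing a "black box" model with SHAP: estimate each feature's contribution
# (SHAP value) to each individual prediction, then summarize the drivers.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical data and a hypothetical tree-based model.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# TreeExplainer makes the input-output relationship more transparent by
# attributing each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Plot which features most strongly drive the model's decisions overall.
shap.summary_plot(shap_values, X_test)
```

In a classroom setting, the resulting summary plot can anchor a discussion of which input features drive the model's decisions and whether those drivers make clinical sense.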

AI-related ethics

Moreover, because AI applications for medical and healthcare use constitute an emerging rather than an established technology, the laws, regulations, and policies governing their creation and implementation are still maturing (1, 11). Nevertheless, medical students, as well as current healthcare professionals, should not remain ignorant of the ethical and legal concerns surrounding AI use and governance. As many AI algorithms or programs rely on a large pool of data for ML purposes, medical students ought to be taught the ethical principles involved in AI-related data collection, storage, and analysis, so as to ensure the protection of the basic rights of human patients or subjects. In this respect, we deem it important for medical instructors to inform their students about the regional and international conventions and charters governing AI use (11), and to encourage them to apply such knowledge when embarking on their medical careers. Even though teaching ethics may seem "easy" compared with mainstream medical topics, we still recommend that systematic teaching efforts be put in place to forestall the risk of AI misuse by future medical practitioners.

Closing statement

In conclusion, this article is meant to be a short commentary on some important considerations for the inclusion of AI content in medical school curricula (see Table 1 for a summary of the main topics discussed above). We have kept our ideas generic in nature so that different medical colleges retain the freedom to work out the specifics of implementation according to local academic norms. It is our hope that these ideas, once implemented, will bring about higher-quality education for future generations of medical students and healthcare professionals.

TABLE 1

Table 1. Key considerations for an AI-oriented medical curriculum.

Author contributions

JZ drafted the initial manuscript. NF assisted JZ with editing it. JZ revised the manuscript after reviews. Both authors made direct intellectual contributions to the work and approved it for publication.

Acknowledgments

The authors thank the reviewers for their insightful comments, which helped to improve the overall quality of the current article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Author disclaimer

Any ideas, opinions, and recommendations expressed in this article are those of the authors and do not reflect the views of the National Institute of Education, Singapore, the main campus of Nanyang Technological University, Singapore, or the Georgia Institute of Technology, Atlanta, GA, USA.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnote

1KNIME Analytics Platform. https://www.knime.com/

References

1. ACE. ACE overview for new and emerging health technologies: Artificial intelligence and its clinical applications. Technical Report HSO-M 03/2020. Agency of Care Effectiveness, Singapore (2020). Available at: https://www.ace-hta.gov.sg/docs/default-source/default-document-library/artificial-intelligence-and-its-clinical-applications.pdf.

2. Ravindran A. Will AI Dictate the Future? Singapore: Marshall Cavendish (2022a).

3. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. (2019) 6:94–8. doi: 10.7861/futurehosp.6-2-94

4. Ravindran A. AI and healthcare. In: Ravindran A, editor. Will AI Dictate the Future? Singapore: Marshall Cavendish (2022b). p. 133–52.

5. Frommeyer TC, Fursmidt RM, Gilbert MM, Bett ES. The desire of medical students to integrate artificial intelligence into medical education: an opinion article. Front Digit Health. (2022) 4:831123. doi: 10.3389/fdgth.2022.831123

6. Lam C, Siefkas A, Zelin NS, Barnes G, Dellinger RP, Vincent JL, et al. Machine learning as a precision-medicine approach to prescribing COVID-19 pharmacotherapy with remdesivir or corticosteroids. Clin Ther. (2021) 43:871–85. doi: 10.1016/j.clinthera.2021.03.016

7. Kompa B, Snoek J, Beam AL. Second opinion needed: communicating uncertainty in medical machine learning. NPJ Digit Med. (2021) 4:1–6. doi: 10.1038/s41746-020-00367-3

8. Ye A. Real artificial intelligence: Understanding extrapolation vs generalization. Towards Data Science (2020). Available at: https://towardsdatascience.com/real-artificial-intelligence-understanding-extrapolation-vs-generalization-b8e8dcf5fd4b.

9. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. (2019) 25:44–56. doi: 10.1038/s41591-018-0300-7

10. Zhong JY, Goh SK, Woo CJ. Heading toward trusted ATCO-AI systems: A literature review (Technical Report No. ATMRI-P1-ATP3_AI-NEURO), engrXiv, 1-31. Air Traffic Management Research Institute, Nanyang Technological University, Singapore (2021). doi: 10.31224/osf.io/t769b

11. WHO. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva, Switzerland: World Health Organization (2021). Available at: https://www.who.int/publications/i/item/9789240029200.

Keywords: artificial intelligence, machine learning - ML, technical skill building, medical education - clinical skills training, medical ethics

Citation: Zhong JY and Fischer NL (2023) Commentary: The desire of medical students to integrate artificial intelligence into medical education: An opinion article. Front. Digit. Health 5:1151390. doi: 10.3389/fdgth.2023.1151390

Received: 26 January 2023; Accepted: 23 March 2023;
Published: 6 April 2023.

Edited by:

James Chow, University of Toronto, Canada

Reviewed by:

Kirti Sundar Sahu, Canadian Red Cross, Canada
Karthik Seetharam, West Virginia State University, United States

© 2023 Zhong and Fischer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jimmy Y. Zhong jimmy.zhong@nie.edu.sg

Specialty Section: This article was submitted to Personalized Medicine, a section of the journal Frontiers in Digital Health
