<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="review-article" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Dent. Med</journal-id>
<journal-title>Frontiers in Dental Medicine</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Dent. Med</abbrev-journal-title>
<issn pub-type="epub">2673-4915</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdmed.2023.1085251</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Dental Medicine</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Artificial intelligence in dentistry&#x2014;A review</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author"><name><surname>Ding</surname><given-names>Hao</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref><uri xlink:href="https://loop.frontiersin.org/people/1376995/overview"/></contrib>
<contrib contrib-type="author"><name><surname>Wu</surname><given-names>Jiamin</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref><uri xlink:href="https://loop.frontiersin.org/people/2123250/overview" /></contrib>
<contrib contrib-type="author"><name><surname>Zhao</surname><given-names>Wuyuan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref><uri xlink:href="https://loop.frontiersin.org/people/2122970/overview" /></contrib>
<contrib contrib-type="author"><name><surname>Matinlinna</surname><given-names>Jukka P.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref><uri xlink:href="https://loop.frontiersin.org/people/2050189/overview" /></contrib>
<contrib contrib-type="author"><name><surname>Burrow</surname><given-names>Michael F.</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref></contrib>
<contrib contrib-type="author" corresp="yes"><name><surname>Tsoi</surname><given-names>James K. H.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="cor1">&#x002A;</xref><uri xlink:href="https://loop.frontiersin.org/people/982216/overview" /></contrib>
</contrib-group>
<aff id="aff1"><label><sup>1</sup></label><addr-line>Applied Oral Sciences &#x0026; Community Dental Care, Faculty of Dentistry</addr-line>, <institution>The University of Hong Kong</institution>, <addr-line>Pokfulam</addr-line>, <country>Hong Kong SAR, China</country></aff>
<aff id="aff2"><label><sup>2</sup></label><addr-line>Division of Dentistry, School of Medical Sciences</addr-line>, <institution>The University of Manchester</institution>, <addr-line>Manchester</addr-line>, <country>United Kingdom</country></aff>
<aff id="aff3"><label><sup>3</sup></label><addr-line>Restorative Dental Sciences, Faculty of Dentistry</addr-line>, <institution>The University of Hong Kong</institution>, <addr-line>Pokfulam</addr-line>, <country>Hong Kong SAR, China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p><bold>Edited by:</bold> Ziyad S. Haidar, University of the Andes, Chile</p></fn>
<fn fn-type="edited-by"><p><bold>Reviewed by:</bold> Artak Heboyan, Yerevan State Medical University, Armenia; Nishant Kumar, Advanced Materials and Processes Research Institute (CSIR), India</p></fn>
<corresp id="cor1"><label>&#x002A;</label><bold>Correspondence:</bold> James K. H. Tsoi <email>jkhtsoi@hku.hk</email></corresp>
<fn fn-type="other" id="fn001"><p><bold>Specialty Section:</bold> This article was submitted to Dental Materials, a section of the journal Frontiers in Dental Medicine</p></fn>
</author-notes>
<pub-date pub-type="epub"><day>20</day><month>02</month><year>2023</year></pub-date>
<pub-date pub-type="collection"><year>2023</year></pub-date>
<volume>4</volume><elocation-id>1085251</elocation-id>
<history>
<date date-type="received"><day>31</day><month>10</month><year>2022</year></date>
<date date-type="accepted"><day>31</day><month>01</month><year>2023</year></date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2023 Ding, Wu, Zhao, Matinlinna, Burrow and Tsoi.</copyright-statement>
<copyright-year>2023</copyright-year><copyright-holder>Ding, Wu, Zhao, Matinlinna, Burrow and Tsoi</copyright-holder><license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Artificial Intelligence (AI) is the ability of machines to perform tasks that normally require human intelligence. AI is not a new term; the concept can be dated back to 1950. However, it did not become a practical tool until about two decades ago. Owing to the rapid development of the three cornerstones of current AI technology&#x2014;big data (coming through digital devices), computational power, and AI algorithms&#x2014;over the past two decades, AI applications have started to provide convenience in people&#x0027;s lives. In dentistry, AI has been adopted in all dental disciplines, i.e., operative dentistry, periodontics, orthodontics, oral and maxillofacial surgery, and prosthodontics. The majority of AI applications in dentistry concern diagnosis based on radiographic or optical images, while other tasks are less amenable than image-based tasks, mainly owing to the constraints of data availability, data uniformity, and computational power for handling 3D data. Evidence-based dentistry (EBD) is regarded as the gold standard for the decision-making of dental professionals, while AI machine learning (ML) models learn from human expertise. ML can thus be seen as another valuable tool to assist dental professionals at multiple stages of clinical cases. This review narrates the history and classification of AI, summarises AI applications in dentistry, discusses the relationship between EBD and ML, and aims to help dental professionals better understand AI as a tool to assist their routine work with improved efficiency.</p>
</abstract>
<kwd-group>
<kwd>artificial intelligence (AI)</kwd>
<kwd>machine learning</kwd>
<kwd>neural network</kwd>
<kwd>dentistry</kwd>
<kwd>evidence-based dentistry</kwd>
</kwd-group><contract-num rid="cn001">17120220</contract-num><contract-num rid="cn002">MHKJFS/075/20</contract-num><contract-num rid="cn003">&#x00A0;</contract-num><contract-sponsor id="cn001">General Research Fund</contract-sponsor><contract-sponsor id="cn002">Research Grants Council of Hong Kong and the Innovation and Technology Fund</contract-sponsor><contract-sponsor id="cn003">Hong Kong Special Administrative Region Government, China</contract-sponsor><counts>
<fig-count count="2"/>
<table-count count="6"/><equation-count count="1"/><ref-count count="98"/><page-count count="0"/><word-count count="0"/></counts>
</article-meta>
</front>
<body><sec id="s1" sec-type="intro"><label>1.</label><title>Introduction</title>
<p>The fourth industrial revolution has opened a new digital era, and one of its most important contributions is Artificial Intelligence (AI). With more and more electronic devices assisting people&#x0027;s lives comprehensively, the data recorded by those devices have made it possible for AI to use and analyse this information with ease. AI is blooming and expanding rapidly in all sectors. It can learn from human expertise and undertake work typically requiring human intelligence. One of its definitions (<xref ref-type="bibr" rid="B1">1</xref>) is &#x201C;<italic>the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision making, and translation between languages</italic>&#x201D;.</p>
<p>AI has been adopted in many fields of industry, such as robotics, automobiles, smart cities, and financial analysis. It has also been used in medicine and dentistry, for example, in medical and dental imaging diagnostics, decision support, precision and digital medicine, drug discovery, wearable technology, hospital monitoring, and robotic and virtual assistants. In many cases, AI can be regarded as a valuable tool to help dentists and clinicians reduce their workload. Besides diagnosing diseases using a single information source directed at a specified disease, AI can learn from multiple information sources (multi-modal data) to diagnose beyond human capabilities. For example, fundus photographs combined with other medical data such as age, gender, BMI, smoking habits, blood pressure, and the likelihood of diabetes have been used to predict heart disease (<xref ref-type="bibr" rid="B2">2</xref>). Thus, AI can discover from fundus photography not only eye diseases such as diabetic retinopathy, but also heart disease. Image-based analysis using AI therefore appears sound and successful. All of this relies on the rapid development of computing capacity (hardware), algorithmic research (software), and large databases (input data). Given these, there is great potential to use AI in the dental and medical fields.</p>
<p>Many studies on AI applications in dentistry are underway or have even been put into practice in aspects such as diagnosis, decision-making, treatment planning, prediction of treatment outcome, and disease prognosis. Many reviews regarding dental AI (<xref ref-type="bibr" rid="B3">3</xref>&#x2013;<xref ref-type="bibr" rid="B8">8</xref>) have been published; this review aims to narrate the development of AI from its incipient stages to the present, describe the classifications of AI, summarise the current advances of AI research in dentistry, and discuss the relationship between evidence-based dentistry (EBD) and AI. The limitations of current AI development in dentistry are also discussed.</p>
</sec>
<sec id="s2"><label>2.</label><title>Artificial intelligence</title>
<sec id="s2a"><label>2.1.</label><title>History of AI</title>
<p>Artificial intelligence is not a new term. Alan Turing wrote in his paper &#x201C;Computing Machinery and Intelligence&#x201D; (<xref ref-type="bibr" rid="B9">9</xref>) in the 1950 issue of <italic>Mind</italic>:</p><disp-quote>
<p><italic>&#x201C;I believe that at the end of the century (20th), the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.&#x201D;</italic></p></disp-quote>
<p>Back then, there was no term for AI; Turing described it as &#x201C;machines thinking&#x201D;. He mathematically investigated the feasibility of AI and explored how to construct intelligent machines and assess machine intelligence. He proposed that, since humans solve problems and make decisions by utilising available information and reasoning, machines can do the same.</p>
<p>In the paper (<xref ref-type="bibr" rid="B9">9</xref>), Turing proposed a test of whether a machine can achieve human-level intelligence. This test is known as the Turing Test. It runs along the following lines: a human evaluator judges natural-language conversations between a human test taker and a machine. The evaluator knows that the conversation is between a human and a machine, and the evaluator, test taker, and machine are separated from one another. The conversation between the test taker and the machine is limited to plain text, i.e., keyboard input, instead of speech. This ensures the test focuses only on the machine&#x0027;s ability to answer questions logically rather than on its speech-interpretation ability. If the evaluator cannot distinguish the human test taker from the machine, the machine is viewed as having passed the Turing Test, and such a machine is said to have &#x201C;machine intelligence&#x201D;.</p>
<p>Later, in 1955, the term AI was first proposed for a 2-month workshop, the <italic>Dartmouth Summer Research Project on Artificial Intelligence</italic> (<xref ref-type="bibr" rid="B10">10</xref>), led by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. However, the concept existed only on paper. Certain restrictions stopped researchers from developing real AI machines in the 1950s. Firstly, computers before 1949 lacked a fundamental prerequisite for AI tasks: they had no storage function, meaning instructions could not be stored, only executed. Secondly, computers were costly at that time. Lastly, funding sources held conservative attitudes towards this new field (<xref ref-type="bibr" rid="B11">11</xref>).</p>
<p>From 1957 to 1974, the AI field grew quickly because of the growth of computer power, its accessibility, and AI algorithms. Examples include ELIZA, a computer program that could interpret natural language and solve problems <italic>via</italic> text (<xref ref-type="bibr" rid="B12">12</xref>). Two &#x201C;AI Winters&#x201D; arrived after this first wave of development, owing to insufficient practical applications and reduced research funding, in the mid-1970s and late 1980s (<xref ref-type="bibr" rid="B13">13</xref>). Nevertheless, AI had its breakthroughs in the period between the two winters, even though developments were few. In the 1980s, it developed along two paths: machine learning (ML) and expert systems. These are two opposite approaches to AI in terms of their theory. ML allows computers to learn by experience (<xref ref-type="bibr" rid="B14">14</xref>); expert systems, on the contrary, simulate the decision-making process of human experts (<xref ref-type="bibr" rid="B15">15</xref>). In other words, ML finds the solution by learning and summarising experience by itself, while expert systems need human experts to input all possible situations and solutions in advance. Expert systems have been used extensively in industry since then. One example is the R1 (XCON) program, an expert system with around 2,500 rules for assisting component selection in computer assembly, which was developed (<xref ref-type="bibr" rid="B16">16</xref>) and used by DEC, a computer manufacturer.</p>
<p>Two important time points in computer vision are 2012 and 2017. In 2012, a graphics processing unit (GPU)-implemented deep learning (DL) network with eight layers was developed (<xref ref-type="bibr" rid="B17">17</xref>). The work won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with a top-5 classification error of 15.3&#x0025;, more than 10.8 percentage points lower than that of the runner-up. In 2017, SENet further lowered the top-5 error to 2.25&#x0025;, surpassing the human top-5 error (5.1&#x0025;) (<xref ref-type="bibr" rid="B18">18</xref>).</p>
<p>Other famous AI examples include Deep Blue&#x2014;a chess-playing expert system that defeated the then world chess champion Garry Kasparov in 1997 (<xref ref-type="bibr" rid="B19">19</xref>). Twenty years later, in 2017, Google&#x0027;s AlphaGo, a DL program, defeated the world No. 1 ranked player Jie Ke in a Go match (<xref ref-type="bibr" rid="B20">20</xref>). More recently, in late 2022, OpenAI launched ChatGPT (Chat Generative Pre-trained Transformer), a text-generation model that can produce human-like responses to text input and that has received extensive discussion since its launch (<xref ref-type="bibr" rid="B21">21</xref>). These examples used different AI approaches.</p>
</sec>
<sec id="s2b"><label>2.2.</label><title>Classification of AI</title>
<p>There are many approaches by which AI can be achieved; different types of AI suit different tasks, and researchers have created various AI classification methods.</p>
<p>AI is a generic term for all non-human intelligence. As <xref ref-type="fig" rid="F1">Figure&#x00A0;1</xref> shows, AI can be further classified as weak AI and strong AI. Weak AI, also called narrow AI, uses a program trained to solve a single or specific task. The AI of today is mostly weak AI. Examples include reinforcement learning, e.g., AlphaGo and automated manipulation robots; natural language processing, e.g., Google Translate and the Amazon chat robot; computer vision, e.g., Tesla Autopilot and face recognition; and data mining, e.g., market customer analysis and personalised content recommendation on social media (<xref ref-type="bibr" rid="B22">22</xref>). Strong AI refers to AI whose ability and intelligence equal those of humans&#x2014;it has its own awareness and behaviour as flexible as a human&#x0027;s (<xref ref-type="bibr" rid="B23">23</xref>). Strong AI aims to create a multi-task algorithm that makes decisions in multiple fields. Research on strong AI has to be conducted very cautiously, as there may be ethical issues and potential dangers. Thus, there are no strong AI applications to date.</p>
<fig id="F1" position="float"><label>Figure 1</label>
<caption><p>Schematic diagram of the relationship between AI, strong AI, weak AI, expert-based systems, machine learning, deep learning and neural network (NN).</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="fdmed-04-1085251-g001.tif"/>
</fig>
<p>ML and expert systems are two different subgroups of weak AI. As shown in <xref ref-type="table" rid="T1">Table&#x00A0;1</xref>, ML can be further classified as supervised, semi-supervised, and unsupervised learning based on the theory of the methods. Supervised learning uses labelled datasets for training, and these labelled datasets are the &#x201C;supervisor&#x201D; of the algorithm. The algorithm learns from the labelled input, extracting and identifying its common features in order to make predictions about unlabelled input (<xref ref-type="bibr" rid="B24">24</xref>). Examples of supervised learning include k-nearest neighbours, logistic regression, random forest, and support-vector machines (<xref ref-type="bibr" rid="B25">25</xref>). Unsupervised learning, on the contrary, works on its own to find the various features of unlabelled data (<xref ref-type="bibr" rid="B26">26</xref>). Semi-supervised learning lies between the two, utilising a small amount of labelled data together with a large amount of unlabelled data during training (<xref ref-type="bibr" rid="B27">27</xref>). Recently, a method called weakly-supervised learning has become increasingly popular in the AI field as a way to reduce labelling costs. In particular, the object segmentation task uses only image-level labels (i.e., only knowing what objects are in the images) instead of object boundary or location information for training (<xref ref-type="bibr" rid="B28">28</xref>).</p>
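As a concrete illustration of supervised learning, the following minimal sketch implements a k-nearest-neighbours classifier in plain Python. The toy feature vectors and labels ("sound" vs. "caries") are hypothetical, purely for illustration; the labelled training set plays the role of the "supervisor" described above.

```python
# A minimal sketch of supervised learning: a k-nearest-neighbours classifier.
# Training data and labels are hypothetical, for illustration only.
from math import dist

def knn_predict(train_x, train_y, query, k=1):
    """Label a query point by majority vote among its k nearest neighbours."""
    neighbours = sorted(zip(train_x, train_y), key=lambda p: dist(p[0], query))[:k]
    labels = [y for _, y in neighbours]
    return max(set(labels), key=labels.count)

# Labelled training data act as the "supervisor" of the algorithm.
train_x = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
train_y = ["sound", "sound", "caries", "caries"]

print(knn_predict(train_x, train_y, (0.85, 0.85), k=3))  # prints "caries"
```

An unsupervised method, by contrast, would receive only `train_x` and have to discover the two clusters itself.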
<table-wrap id="T1" position="float"><label>Table 1</label>
<caption><p>A comparison of supervised learning, semi-supervised learning, and unsupervised learning.</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th valign="top" align="left">Items</th>
<th valign="top" align="center">Supervised learning</th>
<th valign="top" align="center">Semi-supervised learning</th>
<th valign="top" align="center">Unsupervised learning</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Input type</td>
<td valign="top" align="left">Labelled data</td>
<td valign="top" align="left">A mixture of labelled and unlabelled data</td>
<td valign="top" align="left">Unlabelled data</td>
</tr>
<tr>
<td valign="top" align="left">Accuracy</td>
<td valign="top" align="left">High</td>
<td valign="top" align="left">Mid</td>
<td valign="top" align="left">Low</td>
</tr>
<tr>
<td valign="top" align="left">Complexity of the algorithm</td>
<td valign="top" align="left">Low</td>
<td valign="top" align="left">Mid</td>
<td valign="top" align="left">High</td>
</tr>
<tr>
<td valign="top" align="left">Types of algorithm</td>
<td valign="top" align="left">Regression and classification</td>
<td valign="top" align="left">Regression, classification, clustering, and association</td>
<td valign="top" align="left">Clustering and association</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Deep learning is currently a very prominent research area and forms a subset of ML. It can involve both supervised and unsupervised learning. As <xref ref-type="fig" rid="F2">Figure&#x00A0;2</xref> shows, &#x201C;deep&#x201D; refers to an artificial &#x201C;neural network&#x201D; consisting of a minimum of three nodal layers&#x2014;an input layer, multiple &#x201C;hidden&#x201D; layers, and an output layer&#x2014;such that each layer consists of various numbers of interconnected nodes (artificial neurons). Each node receives inputs <italic>x<sub>i</sub></italic> from <italic>m</italic> decisive factors, each input with an associated weight (<italic>w<sub>i</sub></italic>), and the node has a bias threshold (<italic>t</italic>), forming its own (simplified) linear regression model. A weight is assigned to each input of the node. If <inline-formula><mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="IM1"><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>m</mml:mi></mml:msubsup><mml:mrow><mml:msub><mml:mi>w</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>+</mml:mo><mml:mi>t</mml:mi><mml:mo>&#x2265;</mml:mo><mml:mn>0</mml:mn></mml:math></inline-formula>, then the output&#x2009;&#x003D;&#x2009;1, meaning the data is passed on to a node in the next layer. This process of passing data from one layer to the next makes the neural network a feedforward network.</p>
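The node rule above can be transcribed directly into code: a node "fires" (outputs 1) when the weighted sum of its m inputs plus the bias threshold t is non-negative. The weights and inputs below are illustrative values only.

```python
# Direct transcription of the node rule: output 1 if sum(w_i * x_i) + t >= 0.
def node_output(x, w, t):
    """Threshold unit: weighted input sum plus bias, passed through a step."""
    total = sum(wi * xi for wi, xi in zip(w, x)) + t
    return 1 if total >= 0 else 0

print(node_output(x=[1.0, 0.5], w=[0.4, -0.6], t=-0.05))  # 0.4 - 0.3 - 0.05 = 0.05, so 1
print(node_output(x=[1.0, 0.5], w=[0.4, -0.6], t=-0.2))   # 0.4 - 0.3 - 0.2 = -0.1, so 0
```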
<fig id="F2" position="float"><label>Figure 2</label>
<caption><p>Schematic diagram of deep learning.</p></caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="fdmed-04-1085251-g002.tif"/>
</fig>
<p>As mentioned above, a deep neural network can extract features from the imported data without human intervention; instead, it learns those features from large datasets. Expert systems, on the other hand, require human intervention, which in effect means tuning the <italic>w<sub>i</sub></italic> and <italic>t</italic> manually; consequently, less data is required.</p>
<p>Neural networks (NNs) are biologically inspired networks that can be regarded as the pillars of deep learning algorithms. There are different variations of NNs, among which the most important are artificial neural networks (ANNs), convolutional neural networks (CNNs), and generative adversarial networks (GANs).</p>
<sec id="s2b1"><label>2.2.1.</label><title>ANN</title>
<p>An ANN comprises a group of neurons and layers, as illustrated in <xref ref-type="fig" rid="F2">Figure&#x00A0;2</xref>. As mentioned above, this is the basic model for deep learning, consisting of a minimum of three layers. The inputs are processed only in the forward direction. Input neurons extract features of the input data at the input layer and send them to the hidden layers, through which the data passes successively. Finally, the results are summarised and shown in the output layer. Each hidden layer in an ANN can weigh the data received from the previous layer and make adjustments before sending it to the next layer. Each hidden layer thus acts as both an input and an output layer, allowing the ANN to understand more complex features (<xref ref-type="bibr" rid="B29">29</xref>).</p>
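This layer-by-layer forward pass can be sketched minimally as follows, reusing the thresholding rule from the previous section as the activation; the weights and thresholds are hypothetical values chosen only to show data flowing from input to hidden to output layer.

```python
# Minimal sketch of an ANN forward pass: input layer -> hidden layer -> output.
# Weights/thresholds are illustrative; a step activation mirrors the node rule.
def layer_forward(inputs, weights, thresholds):
    """Compute one layer: each node thresholds its weighted input sum."""
    outputs = []
    for w_row, t in zip(weights, thresholds):
        s = sum(w * x for w, x in zip(w_row, inputs)) + t
        outputs.append(1 if s >= 0 else 0)
    return outputs

x = [0.6, 0.9]                                                # input layer
h = layer_forward(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.7])  # hidden layer
y = layer_forward(h, [[1.0, 1.0]], [-1.5])                    # output layer
print(h, y)  # prints [0, 1] [0]
```

Stacking more calls to `layer_forward` adds further hidden layers, which is what makes the network "deep".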
</sec>
<sec id="s2b2"><label>2.2.2.</label><title>CNN</title>
<p>A CNN is a type of deep learning model mainly used for image recognition and generation. The main difference between an ANN and a CNN is that a CNN contains convolution layers, in addition to pooling layers and fully connected layers, among its hidden layers. Convolution layers generate feature maps of the input data using convolution kernels, which are convolved across the entire input image. The weight sharing inherent in convolution reduces the complexity of processing images. A pooling layer usually follows each group of convolution layers and reduces the dimension of the feature maps for further feature extraction. The fully connected layer is used after the convolution and pooling layers. As the name indicates, it connects to all activated neurons in the previous layer and transforms the 2D feature maps into 1D. The 1D feature maps are then associated with category nodes for classification (<xref ref-type="bibr" rid="B30">30</xref>, <xref ref-type="bibr" rid="B31">31</xref>). By using the above-mentioned functional hidden layers, CNNs show higher efficiency and accuracy in image recognition compared with ANNs.</p>
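The two characteristic CNN operations described above can be sketched in plain Python: a "valid" 2D convolution in which one shared kernel slides over the whole image (weight sharing), followed by 2&#x00D7;2 max pooling that halves each dimension of the feature map. The image and kernel values are made up for illustration.

```python
# Toy sketch of a convolution layer followed by a pooling layer.
def conv2d(image, kernel):
    """'Valid' 2D convolution: one shared kernel slid over the whole image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2x2(fmap):
    """2x2 max pooling: keep the largest value in each 2x2 block."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[0, 1, 2, 3, 0],
         [1, 2, 3, 0, 1],
         [2, 3, 0, 1, 2],
         [3, 0, 1, 2, 3],
         [0, 1, 2, 3, 0]]
edge = [[1, -1], [1, -1]]       # one shared 2x2 kernel (weight sharing)
fmap = conv2d(image, edge)      # 4x4 feature map
print(max_pool2x2(fmap))        # 2x2 pooled map
```

A fully connected layer would then flatten the pooled map into a 1D vector and feed it to category nodes, as described above.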
</sec>
<sec id="s2b3"><label>2.2.3.</label><title>GAN</title>
<p>GAN is a kind of deep learning algorithm designed by Goodfellow et al<italic>.</italic> (<xref ref-type="bibr" rid="B32">32</xref>) in 2014. It is an unsupervised learning method designed to automatically discover patterns in the input data and generate new data with features or patterns similar to those of the input data. A GAN consists of two neural networks: a generator and a discriminator. The ultimate goal of the generator is to generate data such that the discriminator cannot determine whether the data came from the generator or from the original input data; the ultimate goal of the discriminator is to distinguish generator-generated data from the original input data as reliably as possible. The two networks compete with each other, and both improve during the competition.</p>
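The adversarial loop can be illustrated on a deliberately tiny scale: below, a one-parameter "generator" and a logistic "discriminator" (both far simpler than the neural networks in a real GAN) play the same game on 1D data drawn around 3.0. The architecture, learning rate, and data are invented for this sketch; only the alternating updates mirror the scheme described above.

```python
# Toy 1D GAN sketch: generator G(z) = theta + 0.1*z, discriminator
# D(x) = sigmoid(a*x + b). All values here are illustrative, not a real GAN.
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

real = lambda: 3.0 + random.gauss(0, 0.1)   # real data cluster around 3.0
theta = 0.0                                 # generator parameter, starts far off
a, b = 1.0, 0.0                             # discriminator parameters
lr = 0.05

for step in range(2000):
    z = random.gauss(0, 1)
    x_real, x_fake = real(), theta + 0.1 * z
    # Discriminator ascent on log D(x_real) + log(1 - D(x_fake))
    d_real, d_fake = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    a += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)
    # Generator ascent on log D(x_fake): pushes fakes toward what D calls real
    d_fake = sigmoid(a * x_fake + b)
    theta += lr * (1 - d_fake) * a

print(round(theta, 2))  # theta drifts from 0.0 towards the real data mean
```

The competition is visible in the updates: the discriminator's gradients reward separating real from fake samples, while the generator's gradient moves its output towards whatever the discriminator currently labels as real.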
<p>Since GANs were designed, they have spread rapidly across AI applications. They are mainly applied to image-to-image translation and to generating plausible photos of objects, scenes, and people (<xref ref-type="bibr" rid="B33">33</xref>, <xref ref-type="bibr" rid="B34">34</xref>). Wu et al<italic>.</italic> (<xref ref-type="bibr" rid="B35">35</xref>) proposed a new 3D-GAN framework in 2016 based on the traditional GAN network. 3D-GAN generates 3D objects in a given 3D space by combining recent advances in GANs and volumetric convolutional networks. Unlike a traditional GAN, it can generate objects in 3D directly or from 2D images, giving a broader range of possible applications in 3D data processing compared with the 2D form.</p>
</sec>
</sec>
</sec>
<sec id="s3"><label>3.</label><title>AI in dentistry</title>
<p>As in other industries, AI in dentistry has started to bloom in recent years. From a dental perspective, applications of AI can be classified into diagnosis, decision-making, treatment planning, and prediction of treatment outcomes. Among all the AI applications in dentistry, the most popular is diagnosis. AI can make diagnoses more accurately and efficiently, thus reducing dentists&#x0027; workload. On the one hand, dentists are increasingly relying on computer programs for making decisions (<xref ref-type="bibr" rid="B36">36</xref>, <xref ref-type="bibr" rid="B37">37</xref>); on the other hand, computer programs for dental use are becoming increasingly intelligent, accurate, and reliable. Research on AI has spread over all fields of dentistry.</p>
<p>Although a large number of journal articles regarding dental AI have been published, it is still difficult to compare articles in terms of study design, data allocation (i.e., training, test, and validation sets), and model performance (i.e., accuracy, sensitivity, specificity, F1, AUC &#x007B;Area Under [the receiver operating characteristic (ROC)] Curve&#x007D;, and recall). Most articles fail to report all of the information mentioned above. Thus, the MI-CLAIM (Minimum Information about Clinical Artificial Intelligence Modeling) checklist has been advocated to bring similar levels of transparency and utility to the application of AI in medicine (<xref ref-type="bibr" rid="B38">38</xref>).</p>
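The performance measures listed above all derive from a binary confusion matrix; the following sketch computes them from hypothetical counts of true/false positives and negatives (tp, fp, tn, fn), chosen only for illustration.

```python
# Common model-performance measures from a binary confusion matrix.
# The counts passed in below are hypothetical, for illustration only.
def metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)              # recall: true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    precision   = tp / (tp + fp)              # positive predictive value (PPV)
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

print(metrics(tp=80, fp=10, tn=90, fn=20))
```

AUC is the one listed measure not computable from a single confusion matrix: it summarises the ROC curve over all decision thresholds.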
<sec id="s3a"><label>3.1.</label><title>AI in operative dentistry</title>
<p>Traditionally, dentists diagnose caries by visual and tactile examination or by radiographic examination according to detailed criteria. However, detecting early-stage lesions is challenging when deep fissures, tight interproximal contacts, or secondary lesions are present. Consequently, many lesions are detected only at advanced stages of dental caries, leading to more complicated treatment, i.e., a dental crown, root canal therapy, or even an implant. Although dental radiography (whether panoramic, periapical, or bitewing views) and the explorer (or dental probe) have been widely used and regarded as highly reliable diagnostic tools for detecting dental caries, much of the screening and final diagnosis tends to rely on dentists&#x0027; experience.</p>
<p>In operative dentistry, there has been research on the detection of dental caries, vertical root fractures, and apical lesions, volumetric assessment of the pulp space, and evaluation of tooth wear (<xref ref-type="bibr" rid="B39">39</xref>&#x2013;<xref ref-type="bibr" rid="B44">44</xref>) (<xref ref-type="table" rid="T2">Table&#x00A0;2</xref>). In a two-dimensional (2D) radiograph, each pixel of the grayscale image has an intensity, i.e., brightness, which represents the density of the object. From such characteristics, an AI algorithm can learn the pattern and give predictions to segment teeth, detect caries, <italic>etc</italic>. For example, Lee et al<italic>.</italic> (<xref ref-type="bibr" rid="B45">45</xref>) developed a CNN algorithm to detect dental caries on periapical radiographs. K&#x00FC;hnisch et al<italic>.</italic> (<xref ref-type="bibr" rid="B46">46</xref>) proposed a CNN algorithm to detect caries on intraoral images. Schwendicke et al<italic>.</italic> (<xref ref-type="bibr" rid="B47">47</xref>) compared the cost-effectiveness of AI for proximal caries detection with dentists&#x0027; diagnoses; the results showed that AI was more effective and less costly.</p>
<table-wrap id="T2" position="float"><label>Table 2</label>
<caption><p>Examples of AI applications in operative dentistry.</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th valign="top" align="left">Study</th>
<th valign="top" align="center">Type of data</th>
<th valign="top" align="center">Type of algorithm</th>
<th valign="top" align="center">Size of dataset (training/testing)</th>
<th valign="top" align="center">Accuracy</th>
<th valign="top" align="center">Sensitivity</th>
<th valign="top" align="center">Specificity</th>
<th valign="top" align="center">AUC</th>
<th valign="top" align="center">Other performances</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Vertical root fracture detection (<xref ref-type="bibr" rid="B40">40</xref>)</td>
<td valign="top" align="left">Panoramic radiography</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">240/60</td>
<td valign="top" align="center"/>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="center">Precision: 0.93;<break/>F1: 0.83</td>
</tr>
<tr>
<td valign="top" align="left">Apical lesion detection (<xref ref-type="bibr" rid="B42">42</xref>)</td>
<td valign="top" align="left">CBCT images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">16/4</td>
<td valign="top" align="center"/>
<td valign="top" align="center">0.93</td>
<td valign="top" align="center">0.88</td>
<td valign="top" align="center"/>
<td valign="top" align="center">PPV: 0.87;<break/>NPV: 0.93</td>
</tr>
<tr>
<td valign="top" align="left">Tooth wear evaluation (<xref ref-type="bibr" rid="B43">43</xref>)</td>
<td valign="top" align="left">Patient&#x0027;s information and oral conditions, intraoral optical images</td>
<td valign="top" align="left">SVM, KNN</td>
<td valign="top" align="center">245 in total</td>
<td valign="top" align="center">SVM: 0.69<break/>KNN: 0.48</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Dental caries detection (<xref ref-type="bibr" rid="B45">45</xref>)</td>
<td valign="top" align="left">Periapical radiography</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">2400/600</td>
<td valign="top" align="center">0.82&#x2013;0.89</td>
<td valign="top" align="center">0.81&#x2013;0.923</td>
<td valign="top" align="center">0.83&#x2013;0.94</td>
<td valign="top" align="center">0.845&#x2013;0.917</td>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Dental caries detection (<xref ref-type="bibr" rid="B46">46</xref>)</td>
<td valign="top" align="left">Intraoral optical images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">1891/479</td>
<td valign="top" align="center">0.925&#x2013;0.933</td>
<td valign="top" align="center">0.896&#x2013;0.957</td>
<td valign="top" align="center">0.815&#x2013;0.943</td>
<td valign="top" align="center">0.955&#x2013;0.964</td>
<td valign="top" align="center"/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="table-fn1"><p>AUC, Area under [the receiver operating characteristic (ROC)] Curve; CBCT, cone-beam computed tomography; CNN, convolutional neural network; KNN, K-Nearest neighbor; NPV, negative predictive value; PPV, positive predictive value; SVM, support-vector machine.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>Several studies mentioned above showed that AI has promising results in early lesion detection, with accuracy equal to or even better than that of dentists. This achievement requires interdisciplinary cooperation between computer scientists and clinicians: the clinicians manually label the radiographic images with the location of caries, while the computer scientists prepare the dataset and the ML algorithm. Finally, clinicians and computer scientists jointly check and verify the accuracy and precision of the training results (<xref ref-type="bibr" rid="B48">48</xref>).</p>
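<p>The joint verification step typically reduces to confusion-matrix metrics of the kind the tables above report (accuracy, sensitivity, specificity, PPV, NPV). A minimal sketch, using hypothetical clinician labels and model predictions:</p>

```python
def binary_metrics(labels, preds):
    """Confusion-matrix metrics of the kind reported in the tables above.

    `labels` are the clinicians' ground-truth annotations (1 = caries),
    `preds` the model's outputs; both are 0/1 sequences of equal length.
    """
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(labels),
        "sensitivity": tp / (tp + fn),  # recall on diseased teeth
        "specificity": tn / (tn + fp),  # recall on healthy teeth
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical hold-out set of 8 tooth surfaces: the model misses one
# lesion and raises one false alarm
m = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                   [1, 1, 0, 0, 0, 1, 1, 0])
```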
</sec>
<sec id="s3b"><label>3.2.</label><title>AI in periodontics</title>
<p>Periodontitis is one of the most widespread diseases. It is a burden for billions of individuals and, if untreated, can lead to tooth mobility and even tooth loss (<xref ref-type="bibr" rid="B49">49</xref>). To prevent severe periodontitis, early detection and treatment are needed. In clinical practice, periodontal disease diagnosis is based on evaluating pocket probing depths and gingival recession. The Periodontal Screening Index (PSI) is frequently used to quantify clinical attachment loss. However, this clinical evaluation has low reliability: the screening for periodontal disease still depends on the experience of dentists, who may miss localized periodontal tissue loss (<xref ref-type="bibr" rid="B50">50</xref>).</p>
<p>In periodontics, AI has been utilised to diagnose periodontitis and classify likely periodontal disease types (<xref ref-type="bibr" rid="B51">51</xref>, <xref ref-type="bibr" rid="B52">52</xref>). In addition, Krois et al<italic>.</italic> (<xref ref-type="bibr" rid="B50">50</xref>) adopted CNN in the detection of periodontal bone loss (PBL) on panoramic radiographs. Lee et al<italic>.</italic> (<xref ref-type="bibr" rid="B53">53</xref>) evaluated the potential usefulness and accuracy of a proposed CNN algorithm to detect periodontally compromised teeth automatically. Yauney et al<italic>.</italic> (<xref ref-type="bibr" rid="B54">54</xref>) reported that periodontal conditions could be examined by a CNN algorithm their group developed using systemic health-related data (<xref ref-type="table" rid="T3">Table&#x00A0;3</xref>).</p>
<table-wrap id="T3" position="float"><label>Table 3</label>
<caption><p>Examples of AI applications in periodontics.</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th valign="top" align="left">Study</th>
<th valign="top" align="center">Type of data</th>
<th valign="top" align="center">Type of algorithm</th>
<th valign="top" align="center">Size of dataset (training/testing)</th>
<th valign="top" align="center">Accuracy</th>
<th valign="top" align="center">Sensitivity</th>
<th valign="top" align="center">Specificity</th>
<th valign="top" align="center">AUC</th>
<th valign="top" align="center">Other performances</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Periodontal bone loss detection (<xref ref-type="bibr" rid="B50">50</xref>)</td>
<td valign="top" align="left">Panoramic radiography</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">1456/353</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.89</td>
<td valign="top" align="center">F1: 0.78;<break/>PPV: 0.76;<break/>NPV: 0.85</td>
</tr>
<tr>
<td valign="top" align="left">Severity of chronic periodontitis prediction (<xref ref-type="bibr" rid="B51">51</xref>)</td>
<td valign="top" align="left">Bacterial category in subgingival biofilms, Patient&#x0027;s information and oral conditions</td>
<td valign="top" align="left">NN, RF, SVM, RLR</td>
<td valign="top" align="center">692/45</td>
<td valign="top" align="center">NN: 0.80&#x2013;0.93<break/>RF: 0.78&#x2013;0.93<break/>SVM: 0.78&#x2013;0.92<break/>RLR: 0.79&#x2013;0.92</td>
<td valign="top" align="center">NN: 0.67&#x2013;0.95;<break/>RF: 0.71&#x2013;0.96;<break/>SVM: 0.72&#x2013;0.97;<break/>RLR: 0.75&#x2013;0.97</td>
<td valign="top" align="center">NN: 0.79&#x2013;0.88;<break/>RF: 0.72&#x2013;0.83;<break/>SVM: 0.61&#x2013;0.82;<break/>RLR: 0.64&#x2013;0.81</td>
<td valign="top" align="center">NN: 0.82&#x2013;0.96;<break/>RF: 0.81&#x2013;0.96;<break/>SVM: 0.83&#x2013;0.96;<break/>RLR: 0.82&#x2013;0.97</td>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Periodontally compromised teeth detection (<xref ref-type="bibr" rid="B53">53</xref>)</td>
<td valign="top" align="left">Periapical radiography</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">1392/348</td>
<td valign="top" align="center">0.734&#x2013;0.828</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="center">0.734&#x2013;0.826</td>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Periodontal condition examination (<xref ref-type="bibr" rid="B54">54</xref>)</td>
<td valign="top" align="left">Systemic health-related data, intraoral optical images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">284 in total</td>
<td valign="top" align="center"/>
<td valign="top" align="center">0.429</td>
<td valign="top" align="center"/>
<td valign="top" align="center">0.677</td>
<td valign="top" align="center">Precision: 0.271</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="table-fn2"><p>AUC, Area under the ROC curve; CNN, convolutional neural network; NN, neural network; NPV, negative predictive value; PPV, positive predictive value; RF, random forest; RLR, regularized logistic regression; SVM, support-vector machine.</p></fn>
</table-wrap-foot>
</table-wrap>
</sec>
<sec id="s3c"><label>3.3.</label><title>AI in orthodontics</title>
<p>Orthodontic treatment planning is usually based on the experience and preference of the orthodontist. As every patient and orthodontist is unique, the treatment is decided mutually by both sides. Traditionally, diagnosing malocclusion takes considerable effort, as many variables need to be considered in the cephalometric analysis, so it is difficult to determine the treatment plan and predict the treatment outcome (<xref ref-type="bibr" rid="B55">55</xref>). AI is therefore a promising tool for such orthodontic problems. In orthodontics, AI has applications (<xref ref-type="table" rid="T4">Table&#x00A0;4</xref>) in treatment planning and in predicting treatment results, such as simulating pre- and post-treatment changes in facial appearance. The impact of orthodontic treatment, the skeletal patterns, and the anatomic landmarks in lateral cephalograms (<xref ref-type="bibr" rid="B67">67</xref>) can be clearly seen with the aid of AI algorithms, greatly assisting communication between patients and dentists.</p>
<table-wrap id="T4" position="float"><label>Table 4</label>
<caption><p>Examples of AI applications in orthodontics.</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="left"/>
<col align="left"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th valign="top" align="left">Study</th>
<th valign="top" align="center">Type of data</th>
<th valign="top" align="center">Type of algorithm</th>
<th valign="top" align="center">Size of dataset (training/testing)</th>
<th valign="top" align="center">Accuracy</th>
<th valign="top" align="center">Sensitivity</th>
<th valign="top" align="center">Specificity</th>
<th valign="top" align="center">AUC</th>
<th valign="top" align="center">Other performances</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Orthodontic treatment results prediction (<xref ref-type="bibr" rid="B56">56</xref>)</td>
<td valign="top" align="left">Facial 3D images</td>
<td valign="top" align="left">DL</td>
<td valign="top" align="center">137 in total</td>
<td valign="top" align="center">N/A</td>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Diagnosis of the need for orthodontic treatment (<xref ref-type="bibr" rid="B57">57</xref>)</td>
<td valign="top" align="left">Orthodontics-related oral condition data</td>
<td valign="top" align="left">Bayesian network</td>
<td valign="top" align="center">800/200</td>
<td valign="top" align="center">0.93&#x2013;0.96</td>
<td valign="top" align="center">0.94&#x2013;0.96</td>
<td valign="top" align="left">0.94&#x2013;1</td>
<td valign="top" align="left">0.91</td>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Tooth extraction determination in orthodontic treatments (<xref ref-type="bibr" rid="B58">58</xref>)</td>
<td valign="top" align="left">Orthodontics-related indices</td>
<td valign="top" align="left">ANN</td>
<td valign="top" align="center">180/20</td>
<td valign="top" align="center">0.8</td>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Tooth extraction determination in orthodontic treatments (<xref ref-type="bibr" rid="B59">59</xref>)</td>
<td valign="top" align="left">Cephalometric variables, orthodontics-related indices</td>
<td valign="top" align="left">ANN</td>
<td valign="top" align="center">96/60</td>
<td valign="top" align="center">0.93</td>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center">ICC: 0.97&#x2013;0.99</td>
</tr>
<tr>
<td valign="top" align="left">Cephalometric landmarks locating (<xref ref-type="bibr" rid="B60">60</xref>, <xref ref-type="bibr" rid="B61">61</xref>)</td>
<td valign="top" align="left">Lateral cephalometric radiography</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">1028/283</td>
<td valign="top" align="center">0.804&#x2013;0.962</td>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Tooth landmark/axis detection (<xref ref-type="bibr" rid="B62">62</xref>)</td>
<td valign="top" align="left">Intraoral optical images, CBCT images</td>
<td valign="top" align="left">NN</td>
<td valign="top" align="center">2219/865</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center">Average errors: 0.37&#x2005;mm (landmark detection); 3.33&#x00B0; (axis detection)</td>
</tr>
<tr>
<td valign="top" align="left">Skeletal classification (<xref ref-type="bibr" rid="B63">63</xref>)</td>
<td valign="top" align="left">Lateral cephalometric radiography</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">5890 in total</td>
<td valign="top" align="center">0.8951&#x2013;0.964</td>
<td valign="top" align="center">0.8427&#x2013;0.9459</td>
<td valign="top" align="left">0.9213&#x2013;0.9729</td>
<td valign="top" align="left">0.889&#x2013;0.991</td>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Tooth surgery/extraction determination in orthodontic treatments (<xref ref-type="bibr" rid="B64">64</xref>)</td>
<td valign="top" align="left">Lateral cephalometric radiography, orthodontics-related indices</td>
<td valign="top" align="left">ANN</td>
<td valign="top" align="center">204/112</td>
<td valign="top" align="center">0.91&#x2013;0.96</td>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center">ICC: 0.97&#x2013;0.99</td>
</tr>
<tr>
<td valign="top" align="left">Tooth segmentation (<xref ref-type="bibr" rid="B65">65</xref>)</td>
<td valign="top" align="left">3D models from intraoral scanner</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">1600/400</td>
<td valign="top" align="center">0.980&#x2013;0.986</td>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center">F1: 0.942</td>
</tr>
<tr>
<td valign="top" align="left">Tooth and alveolar bone segmentation (<xref ref-type="bibr" rid="B66">66</xref>)</td>
<td valign="top" align="left">CBCT images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">3172/1359</td>
<td valign="top" align="center">Tooth: 0.915<break/>Alveolar bone: 0.93</td>
<td valign="top" align="center">Tooth: 0.921;<break/>Alveolar bone: 0.935</td>
<td valign="top" align="left"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="table-fn3"><p>3D, three-dimensional; AUC, Area under the ROC curve; ANN, artificial neural network; CBCT, cone-beam computed tomography; CNN, convolutional neural network; DL, deep learning; NN, neural network; ICC, intraclass correlation coefficient.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>A Bayesian-based decision support system was developed by Thanathornwong (<xref ref-type="bibr" rid="B57">57</xref>) to diagnose the need for orthodontic treatment based on orthodontics-related data as input. Xie et al<italic>.</italic> (<xref ref-type="bibr" rid="B58">58</xref>) proposed an ANN model to evaluate whether extractions are needed from lateral cephalometric radiographs; a similar evaluation system was proposed by Jung et al<italic>.</italic> (<xref ref-type="bibr" rid="B59">59</xref>). Apart from predicting the extractions needed for orthodontic purposes, AI has been adopted to locate cephalometric landmarks. Park et al<italic>.</italic> (<xref ref-type="bibr" rid="B60">60</xref>, <xref ref-type="bibr" rid="B61">61</xref>) demonstrated a DL algorithm for automatically identifying cephalometric landmarks on radiographs with high accuracy. Bulatova et al<italic>.</italic> (<xref ref-type="bibr" rid="B68">68</xref>) and Kunz et al<italic>.</italic> (<xref ref-type="bibr" rid="B69">69</xref>) developed similar AI algorithms, with accuracies comparable to those of human examiners in identifying those landmarks. An automatic system for skeletal classification using lateral cephalometric radiographs was proposed by Yu et al<italic>.</italic> (<xref ref-type="bibr" rid="B63">63</xref>).</p>
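<p>The cited studies' exact models are not detailed here, but a common formulation of automated cephalometric landmarking is heatmap regression: the network is trained to output one Gaussian heatmap per landmark, and the landmark coordinate is recovered as the heatmap peak. A sketch of the target generation and decoding steps, with an illustrative landmark position:</p>

```python
import numpy as np

def gaussian_heatmap(h, w, cy, cx, sigma=2.0):
    """Training target: a 2D Gaussian centred on the annotated landmark."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

def decode_landmark(heatmap):
    """Inference: recover the landmark as the (row, col) of the heatmap peak."""
    idx = int(np.argmax(heatmap))
    return divmod(idx, heatmap.shape[1])

# Hypothetical annotation: a landmark at pixel (20, 41) on a 64x64 crop;
# the network would be trained to reproduce this map, one map per landmark
hm = gaussian_heatmap(64, 64, cy=20, cx=41)
peak = decode_landmark(hm)
```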
<p>Besides locating multiple cephalometric landmarks and classification, AI systems have been used in orthodontic treatment planning. Choi et al<italic>.</italic> (<xref ref-type="bibr" rid="B64">64</xref>) proposed an AI model to judge whether surgery is needed using lateral cephalometric radiographs. Most orthodontic applications thus concern landmark identification and treatment planning, which are tedious procedures for orthodontists. A basic task in orthodontic treatment planning is to segment and classify the teeth. AI has also been used for these purposes on multiple data sources, such as radiographs and full-arch 3D digital optical scans (<xref ref-type="bibr" rid="B65">65</xref>, <xref ref-type="bibr" rid="B66">66</xref>). Cui et al<italic>.</italic> proposed several AI algorithms to automatically segment teeth on digital tooth models scanned by a 3D intraoral scanner (<xref ref-type="bibr" rid="B65">65</xref>) and on CBCT images (<xref ref-type="bibr" rid="B66">66</xref>, <xref ref-type="bibr" rid="B70">70</xref>). In addition to tooth segmentation, they also segmented alveolar bone; the efficiency exceeded that of radiologists (i.e., about 500 times faster). The paper also claimed that the algorithm works well in challenging cases with variable dental abnormalities (<xref ref-type="bibr" rid="B66">66</xref>).</p>
</sec>
<sec id="s3d"><label>3.4.</label><title>AI in oral and maxillofacial pathology</title>
<p>Oral and Maxillofacial Pathology (OMFP) is a specialty for examining pathological conditions and diagnosing diseases in the oral and maxillofacial region. The most severe of these diseases is oral cancer. Statistics from the World Health Organization (WHO) show that every year over 657,000 patients are diagnosed with oral cancer globally, with more than 330,000 deaths (<xref ref-type="bibr" rid="B71">71</xref>). In OMFP, as shown in <xref ref-type="table" rid="T5">Table&#x00A0;5</xref>, AI has been researched mostly for tumour and cancer detection based on radiographic, microscopic, and ultrasonographic images. In addition, anatomical structures such as nerves in the oral cavity, interdigitated tongue muscles, and the parotid and salivary glands can be located on radiographs by AI (<xref ref-type="bibr" rid="B72">72</xref>). CNN algorithms have been demonstrated to be a suitable tool for automatically detecting cancers (<xref ref-type="bibr" rid="B73">73</xref>, <xref ref-type="bibr" rid="B78">78</xref>). It is worth mentioning that AI also plays a role in managing cleft lip and palate, including risk prediction, diagnosis, pre-surgical orthopaedics, speech assessment, and surgery (<xref ref-type="bibr" rid="B79">79</xref>).</p>
<table-wrap id="T5" position="float"><label>Table 5</label>
<caption><p>Examples of AI applications in oral and maxillofacial pathology.</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th valign="top" align="left">Study</th>
<th valign="top" align="center">Type of data</th>
<th valign="top" align="center">Type of algorithm</th>
<th valign="top" align="center">Size of dataset (training/testing)</th>
<th valign="top" align="center">Accuracy</th>
<th valign="top" align="center">Sensitivity</th>
<th valign="top" align="center">Specificity</th>
<th valign="top" align="center">AUC</th>
<th valign="top" align="center">Other performances</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Mandibular third molar and IAN positional relationship detection (<xref ref-type="bibr" rid="B72">72</xref>)</td>
<td valign="top" align="left">Panoramic radiography</td>
<td valign="top" align="left">CNN (ResNet-50)</td>
<td valign="top" align="left">571 in total</td>
<td valign="top" align="center">0.7232&#x2013;0.8065</td>
<td valign="top" align="center">0.8462&#x2013;0.8667</td>
<td valign="top" align="center">0.5532&#x2013;0.75</td>
<td valign="top" align="center">0.66&#x2013;0.83</td>
<td valign="top" align="center">Precision: 0.62&#x2013;0.83; F1: 0.61&#x2013;0.73</td>
</tr>
<tr>
<td valign="top" align="left">OSCC diagnosis (<xref ref-type="bibr" rid="B73">73</xref>)</td>
<td valign="top" align="left">Confocal laser endomicroscopy images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">116 video sequences</td>
<td valign="top" align="center">0.883</td>
<td valign="top" align="center">0.866</td>
<td valign="top" align="center">0.9</td>
<td valign="top" align="center">0.96</td>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">OPMDs and OSCC diagnosis (<xref ref-type="bibr" rid="B74">74</xref>)</td>
<td valign="top" align="left">Intraoral optical images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">980 in total</td>
<td valign="top" align="center"/>
<td valign="top" align="center">0.73&#x2013;0.99</td>
<td valign="top" align="center">0.83&#x2013;0.99</td>
<td valign="top" align="center">0.71&#x2013;1</td>
<td valign="top" align="center">Precision: 0.63&#x2013;0.98; F1: 0.68&#x2013;0.98</td>
</tr>
<tr>
<td valign="top" align="left">OPMDs diagnosis (<xref ref-type="bibr" rid="B75">75</xref>)</td>
<td valign="top" align="left">OCT Images</td>
<td valign="top" align="left">ANN, SVM</td>
<td valign="top" align="left">128/271 sets</td>
<td valign="top" align="center">0.52&#x2013;0.84</td>
<td valign="top" align="center">0.83&#x2013;0.93</td>
<td valign="top" align="center">0.69&#x2013;0.82</td>
<td valign="top" align="center"/>
<td valign="top" align="center">PPV: 0.51&#x2013;0.95; NPV: 0.76&#x2013;0.96</td>
</tr>
<tr>
<td valign="top" align="left">OPMDs diagnosis (<xref ref-type="bibr" rid="B76">76</xref>)</td>
<td valign="top" align="left">OCT Images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">6/15 sets</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">1</td>
<td valign="top" align="center">0.7</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
</tr>
<tr>
<td valign="top" align="left">Ameloblastoma and KCOT diagnosis (<xref ref-type="bibr" rid="B77">77</xref>)</td>
<td valign="top" align="left">Panoramic radiography</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">400/100</td>
<td valign="top" align="center">0.83</td>
<td valign="top" align="center">0.818</td>
<td valign="top" align="center">0.833</td>
<td valign="top" align="center">0.88</td>
<td valign="top" align="center">Diagnostic time: 38 s</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="table-fn4"><p>AUC, Area under the ROC curve; CNN, convolutional neural network; IAN, inferior alveolar nerve; KCOT, keratocystic odontogenic tumour; NPV, negative predictive value; OCT, optical coherence tomography; OPMD, oral potentially malignant disorder; OSCC, oral squamous cell carcinoma; PPV, positive predictive value.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>Early detection and diagnosis of various mucosal lesions are essential for classifying them as benign or malignant. Surgical resection is required for malignant lesions. However, some lesions are similar in appearance, thus requiring diagnosis from biopsy slides and radiographs. Pathologists diagnose disease by observing the morphology of stained specimens on glass slides under a microscope (<xref ref-type="bibr" rid="B80">80</xref>). It is tedious work that demands much effort from pathologists. Of all the biopsies that need to be examined, only around 20&#x0025; are found to be malignancies. Thus, AI can be a suitable tool for aiding pathologists in this task.</p>
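<p>That roughly 20&#x0025; malignancy rate matters when interpreting the sensitivity and specificity figures reported in <xref ref-type="table" rid="T5">Table&#x00A0;5</xref>: Bayes' rule converts them into post-test probabilities, the PPV and NPV several studies report. A worked sketch with illustrative (not cited) test characteristics:</p>

```python
def post_test_probs(sens, spec, prevalence):
    """Bayes' rule: turn test sensitivity/specificity and disease prevalence
    into PPV and NPV (the probabilities a positive/negative call is correct)."""
    tp = sens * prevalence
    fp = (1.0 - spec) * (1.0 - prevalence)
    tn = spec * (1.0 - prevalence)
    fn = (1.0 - sens) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical screening model with 0.90 sensitivity and specificity, applied
# where only ~20% of examined biopsies turn out malignant
ppv, npv = post_test_probs(0.90, 0.90, 0.20)
# ppv ~ 0.69: even a good test yields many false alarms at low prevalence
```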
<p>Warin et al<italic>.</italic> (<xref ref-type="bibr" rid="B74">74</xref>) used a CNN approach to detect oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) in intraoral optical images. In addition to intraoral optical images, OCT has been used to identify benign and malignant lesions in the oral mucosa. James et al<italic>.</italic> (<xref ref-type="bibr" rid="B75">75</xref>) used ANN and SVM models to distinguish malignant and dysplastic oral lesions. Heidari et al<italic>.</italic> (<xref ref-type="bibr" rid="B76">76</xref>) used a CNN network, AlexNet (<xref ref-type="bibr" rid="B17">17</xref>), to distinguish normal and abnormal head and neck mucosa. Aubreville et al<italic>.</italic> (<xref ref-type="bibr" rid="B73">73</xref>) used a CNN algorithm to automatically diagnose oral squamous cell carcinoma (SCC) from confocal laser endomicroscopy images; the study showed that the CNN algorithm was especially suitable for early diagnosis of SCC. Poedjiastoeti et al<italic>.</italic> (<xref ref-type="bibr" rid="B77">77</xref>) also used a CNN algorithm to identify and distinguish ameloblastoma and keratocystic odontogenic tumour (KCOT), two oral tumours with similar features in radiographic images. By comparing the computer-generated results with the biopsy results, the accuracy of the CNN algorithm was found to be 83&#x0025; and the diagnostic time 38&#x2005;s. These values were similar to those of oral and maxillofacial specialists.</p>
</sec>
<sec id="s3e"><label>3.5.</label><title>AI in prosthodontics</title>
<p>In prosthodontics, a typical treatment process to prepare a dental crown includes tooth preparation, impression taking, cast trimming, restoration design, fabrication, try-in, and cementation. The application of AI in prosthodontics mainly lies in restoration design (<xref ref-type="table" rid="T6">Table&#x00A0;6</xref>). CAD/CAM has digitalised the design work in commercialised products, including CEREC, Sirona, 3Shape, <italic>etc</italic>. Although this has dramatically increased the efficiency of the design process by utilising a tooth library for crown design, it still cannot achieve a custom-made design for individual patients (<xref ref-type="bibr" rid="B81">81</xref>). With the development of AI, Hwang et al<italic>.</italic> (<xref ref-type="bibr" rid="B82">82</xref>) and Tian et al<italic>.</italic> (<xref ref-type="bibr" rid="B83">83</xref>) proposed novel approaches based on 2D-GAN models that generate a crown by learning from technicians&#x0027; designs. The training data were 2D depth maps converted from 3D tooth models. Ding (<xref ref-type="bibr" rid="B84">84</xref>) reported a 3D-DCGAN network that utilised 3D data directly in the crown generation process; the morphology of the generated crowns was similar to that of natural teeth. Integrating AI with CAD/CAM or 3D/4D printing can achieve a more desirable workflow with high efficiency (<xref ref-type="bibr" rid="B88">88</xref>). AI has also been used in shade matching (<xref ref-type="bibr" rid="B85">85</xref>) and in debonding prediction of CAD/CAM restorations (<xref ref-type="bibr" rid="B86">86</xref>).</p>
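<p>The 2D-GAN pipelines above first flatten a 3D tooth surface into a depth map before training. A minimal sketch of that conversion, with a hypothetical grid size, unit bounding box, and toy point set (not the cited studies' actual preprocessing): each map cell keeps the maximum height of the surface points projecting into it.</p>

```python
import numpy as np

def to_depth_map(points, grid=64, lo=0.0, hi=1.0):
    """Rasterise a 3D surface point set (x, y, z) into a 2D depth map:
    each cell stores the maximum z (height) of the points falling in it."""
    dm = np.zeros((grid, grid))
    ij = np.clip(((points[:, :2] - lo) / (hi - lo) * grid).astype(int),
                 0, grid - 1)
    for (i, j), z in zip(ij, points[:, 2]):
        dm[i, j] = max(dm[i, j], z)
    return dm

# Toy occlusal surface: a tall cusp point above a lower base point at the
# centre, plus one point near the corner of the bounding box
pts = np.array([[0.5, 0.5, 1.0],
                [0.5, 0.5, 0.4],
                [0.1, 0.1, 0.2]])
dm = to_depth_map(pts)
```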
<table-wrap id="T6" position="float"><label>Table 6</label>
<caption><p>Examples of AI applications in prosthodontics.</p></caption>
<table frame="hsides" rules="groups">
<colgroup>
<col align="left"/>
<col align="left"/>
<col align="left"/>
<col align="center"/>
<col align="center"/>
<col align="center"/>
<col align="left"/>
<col align="center"/>
<col align="left"/>
</colgroup>
<thead>
<tr>
<th valign="top" align="left">Study</th>
<th valign="top" align="center">Type of data</th>
<th valign="top" align="center">Type of algorithm</th>
<th valign="top" align="center">Size of dataset (training/testing)</th>
<th valign="top" align="center">Accuracy</th>
<th valign="top" align="center">Sensitivity</th>
<th valign="top" align="center">Specificity</th>
<th valign="top" align="center">AUC</th>
<th valign="top" align="center">Other performances</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Crown generation (<xref ref-type="bibr" rid="B82">82</xref>)</td>
<td valign="top" align="left">Intraoral scanner/depth map</td>
<td valign="top" align="left">GAN</td>
<td valign="top" align="center">3070/243 (Teeth)</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
</tr>
<tr>
<td valign="top" align="left">Crown generation (<xref ref-type="bibr" rid="B83">83</xref>)</td>
<td valign="top" align="left">Intraoral scanner/depth map</td>
<td valign="top" align="left">GAN</td>
<td valign="top" align="center">700/80 (Teeth)</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
</tr>
<tr>
<td valign="top" align="left">Crown generation (<xref ref-type="bibr" rid="B84">84</xref>)</td>
<td valign="top" align="left">Intraoral scanner</td>
<td valign="top" align="left">3D&#x2013;DCGAN</td>
<td valign="top" align="center">600/12</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
</tr>
<tr>
<td valign="top" align="left">Shade matching (<xref ref-type="bibr" rid="B85">85</xref>)</td>
<td valign="top" align="left">CIE LAB color space number</td>
<td valign="top" align="left">BPNN</td>
<td valign="top" align="center">39/4</td>
<td valign="top" align="center"/>
<td valign="top" align="center"/>
<td valign="top" align="left"/>
<td valign="top" align="center"/>
<td valign="top" align="left">The proposed method had a lower <italic>&#x0394;</italic>E compared with traditional visual shade matching.</td>
</tr>
<tr>
<td valign="top" align="left">Resin composite crowns debonding prediction (<xref ref-type="bibr" rid="B86">86</xref>)</td>
<td valign="top" align="left">Optical images of abutments</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">6480/2160</td>
<td valign="top" align="center">0.985</td>
<td valign="top" align="center">1</td>
<td valign="top" align="left"/>
<td valign="top" align="center">0.998</td>
<td valign="top" align="left">Precision: 0.97;<break/>F1: 0.985</td>
</tr>
<tr>
<td valign="top" align="left">Dental arch classification (<xref ref-type="bibr" rid="B87">87</xref>)</td>
<td valign="top" align="left">Intraoral optical images</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="center">1016/168</td>
<td valign="top" align="center">0.995&#x2013;0.997</td>
<td valign="top" align="center">1</td>
<td valign="top" align="left"/>
<td valign="top" align="center">0.98&#x2013;0.99</td>
<td valign="top" align="left">Precision: 0.25;<break/>F1: 0.4</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="table-fn5"><p>AUC, Area under the ROC curve; BPNN, back-propagation neural network; CIE, international commission on illumination; CNN, convolutional neural network; GAN, generative adversarial network; 3D-DCGAN, 3-dimensional deep convolutional generative adversarial network.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>Apart from fixed prosthodontics, design in removable prosthodontics is more challenging because more factors and variables must be considered. No ML algorithm is yet available for designing removable dentures, although several expert (knowledge-based) systems have been introduced (<xref ref-type="bibr" rid="B89">89</xref>&#x2013;<xref ref-type="bibr" rid="B91">91</xref>). Current ML algorithms focus instead on assisting the design process of removable dentures, e.g., classification of dental arches (<xref ref-type="bibr" rid="B87">87</xref>) and prediction of facial appearance in edentulous patients (<xref ref-type="bibr" rid="B92">92</xref>).</p>
</sec>
</sec>
<sec id="s4" sec-type="discussion"><label>4.</label><title>Discussion</title>
<p>The successes of AI have demonstrated that it can learn beyond human expertise. In fact, this development could not have been achieved without advances in computer technology (software), computing capacity (hardware), and large databases (input data). ML tasks involving 3D models require high computational power to train the algorithm, and compared with well-studied 2D image- and video-based tasks, current computational power may still be insufficient for performing classification or regression directly on 3D data. The millions of points or meshes in a 3D model cannot be loaded onto a GPU at once, so sampled representations of the model (i.e., depth maps, voxels, point clouds, and meshes) are often used to reduce the computational burden, at the cost of sacrificing detail in the conversion. In addition to the massive amount of digitalised medical data now available for training ML models, which did not exist previously, the development of wearable devices also contributes to the acquisition of medical big data. Thus, the evolution of AI applications depends greatly on the AI algorithm, computational power, and digitalised training data.</p>
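The memory pressure described above can be illustrated with a toy example. The sketch below is an illustrative assumption, not a pipeline from any of the reviewed studies: it downsamples a dense point cloud with a voxel grid, the kind of sampled representation that trades anatomical detail for a tractable GPU workload.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce a dense point cloud by keeping one centroid per occupied voxel.

    `points` is an (N, 3) array of xyz coordinates; the return value has
    at most one representative point per voxel of edge length `voxel_size`.
    """
    # Assign each point to a voxel by integer-dividing its coordinates.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel and average them into one centroid.
    _, inverse, counts = np.unique(
        voxel_idx, axis=0, return_inverse=True, return_counts=True
    )
    inverse = inverse.ravel()
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)  # sum points per voxel
    return centroids / counts[:, None]     # divide to get the mean

# A million random points in the unit cube collapse to a far smaller cloud.
rng = np.random.default_rng(0)
cloud = rng.random((1_000_000, 3))
small = voxel_downsample(cloud, voxel_size=0.05)
```

Each occupied voxel contributes one centroid, so a scan with millions of points collapses to at most one point per grid cell; a finer grid preserves more detail at a higher computational cost.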
<p>Evidence-Based Dentistry (EBD), a more specific branch of Evidence-Based Medicine (EBM), is defined as &#x201C;<italic>an approach to oral health care that requires the judicious integration of systematic assessments of clinically relevant scientific evidence, relating to the patient&#x0027;s oral and medical condition and history, with the dentist&#x0027;s clinical expertise and the patient&#x0027;s treatment needs and preferences</italic>&#x201D; (<xref ref-type="bibr" rid="B93">93</xref>). Both EBM and EBD are regarded as the gold standard for health professionals&#x0027; decision-making. As ML models learn from human expertise, they can be seen as another useful tool for health professionals at multiple stages of clinical care.</p>
<p>On one hand, ML could assist clinicians in storing and analysing constantly updated medical knowledge and patient-related data. ML algorithms are adept at finding patterns in patients&#x0027; diagnostic data, improving current medical treatments, discovering new drugs, enabling precision medicine, and minimising human error. EBD has similar aims, but ML can achieve them more quickly because it uses existing data, whereas EBD usually requires randomized controlled trials. On the other hand, medical data are challenging to handle, since a diagnosis is usually based on multiple sources. ML requires a large amount of training data, which may be subject to systematic bias or be inaccessible; both can influence the final result. The precision of an ML model cannot easily be improved merely by increasing the quantity of the training data without also improving its quality. Moreover, ML cannot account for the differing diagnoses made by different clinicians using different data sources.</p>
<p>In addition, medical data are often stored in isolated, individualised systems with limited interoperability, owing to concerns such as ethical problems, data protection, and organisational barriers. Research on federated learning (<xref ref-type="bibr" rid="B94">94</xref>) is a potential way to solve data-privacy problems. Moreover, professional personnel are usually required to label dental and medical data. These limitations leave the datasets unstructured and insufficient in size, at least compared with other AI fields (<xref ref-type="bibr" rid="B95">95</xref>). Few-shot learning has been studied to tackle this problem (<xref ref-type="bibr" rid="B96">96</xref>).</p>
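The federated idea can be made concrete with a minimal sketch of one federated-averaging (FedAvg) round: each institution trains locally and only the resulting weight vectors, never the raw patient records, leave the site. The clinic counts and weight values below are hypothetical, chosen only for illustration.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg round: combine locally trained weights, not raw records.

    Each client's parameter vector is weighted by the size of its local
    dataset, so larger sites contribute proportionally more to the global model.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical example: three clinics with different amounts of local data.
clinic_models = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
clinic_sizes = [100, 300, 600]
global_model = federated_average(clinic_models, clinic_sizes)
# → array([2.2, 1.4]): (1*0.1 + 3*0.3 + 2*0.6, 2*0.1 + 0*0.3 + 2*0.6)
```

A production system would add secure aggregation and many communication rounds; the point here is only that the data-sharing step is replaced by a parameter-sharing step.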
<p>When using dental and medical data for ML training, one must be very careful with their complexity, sensitivity, and limited validation methods (<xref ref-type="bibr" rid="B97">97</xref>). Dental and medical data from electronic records are usually of low integrity. The data often lack systematic allocation and are not sampled at random: data from hospitals risk over-representing the sick, while data collected from wearable devices risk over-representing the healthy. Furthermore, the level of healthcare systems varies across countries and regions. Data from a single country or region may yield a model that is precise but not accurate and that cannot be applied to countries with different healthcare conditions; AI applications trained on such data will be biased (<xref ref-type="bibr" rid="B95">95</xref>). ML with such long-tailed data has been studied to minimise this influence (<xref ref-type="bibr" rid="B98">98</xref>). Moreover, the outcomes of AI are often not readily applicable: the single output provided by most contemporary medical AI applications only partially informs the complex decision-making required in clinical practice. Unlike EBD, ML has no system to monitor the quality of the input medical data or the degree of bias; EBD takes a more macroscopic view, and decisions are usually made from several data sources to minimise bias. Owing to these constraints, some clinicians remain reserved about ML because of its &#x201C;black box&#x201D; mechanism, in which the rationale for reaching a specific result cannot be explained. Although explainable AI has been studied for this purpose (<xref ref-type="bibr" rid="B99">99</xref>), EBD is straightforward and has a more transparent mechanism (<xref ref-type="bibr" rid="B100">100</xref>).</p>
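One common remedy for long-tailed training data is to re-weight the loss by inverse class frequency, so that rare diagnoses are not drowned out by common ones. The sketch below shows that single strategy under stated assumptions; the label set is hypothetical, and the cited work on long-tailed learning covers many further techniques.

```python
import numpy as np

def inverse_frequency_weights(labels: np.ndarray) -> dict:
    """Per-class weights that up-weight rare classes in a long-tailed dataset.

    Each class is weighted by total_samples / (num_classes * class_count),
    so the aggregate loss contribution of a rare class matches that of a
    common one.
    """
    classes, counts = np.unique(labels, return_counts=True)
    weights = labels.size / (classes.size * counts.astype(float))
    return dict(zip(classes.tolist(), weights.tolist()))

# Hypothetical long-tailed label set: 90 "healthy" records, 10 "lesion".
labels = np.array(["healthy"] * 90 + ["lesion"] * 10)
w = inverse_frequency_weights(labels)
# The rare "lesion" class receives a weight 9x that of "healthy".
```

These weights would typically be passed to a framework's loss function (e.g., as per-class weights in a cross-entropy loss) rather than used on their own.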
<p>EBD and ML each have their own advantages and disadvantages. ML is a new approach in the medical field for improving diagnosis and predicting treatment outcomes by discovering patterns and associations within medical datasets. However, while current ML applications mainly rely on a single type of dataset, ML is capable of acquiring information from EBD, which uses different kinds of data for diagnosis. EBD can in turn benefit from ML in facilitating the discovery of underlying connections between medical data and disease and in providing better, individualised diagnoses. EBD and ML are thus complementary in serving clinicians, who can refer to both to maximise their advantages and apply them in medical practice.</p>
</sec>
<sec id="s5" sec-type="conclusions"><label>5.</label><title>Conclusion</title>
<p>New technologies are developed and adopted rapidly in the dental field. AI is among the most promising of them, offering high accuracy and efficiency provided that unbiased training data are used and the algorithm is properly trained. Dental practitioners can regard AI as a supplemental tool to reduce their workload and to improve precision and accuracy in diagnosis, decision-making, treatment planning, prediction of treatment outcomes, and disease prognosis.</p>
</sec>
</body>
<back>
<sec id="s6"><title>Author contributions</title>
<p>HD: Methodology, Investigation, Visualization, Writing&#x2014;original draft. JW: Writing&#x2014;review &#x0026; editing. WZ: Writing&#x2014;review &#x0026; editing. JPM: Writing&#x2014;review &#x0026; editing. MFB: Writing&#x2014;review &#x0026; editing. JKHT: Conceptualization, Investigation, Writing&#x2014;review &#x0026; editing. All authors agree to be accountable for the content of the work. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec id="s7" sec-type="funding-information"><title>Funding</title>
<p>This study was submitted in partial fulfilment of the requirements for the PhD degree of the first author at the University of Hong Kong. This work was supported by the General Research Fund (grant no. 17120220) of Research Grants Council of Hong Kong and the Innovation and Technology Fund (MHKJFS/075/20) of Hong Kong Special Administrative Region Government, China.</p>
</sec>
<sec id="s8" sec-type="COI-statement"><title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="s9" sec-type="disclaimer"><title>Publisher&#x0027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list><title>References</title>
<ref id="B1"><label>1.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Stevenson</surname><given-names>A</given-names></name></person-group>. <source>Oxford Dictionary of English</source>. <publisher-loc>USA</publisher-loc>: <publisher-name>Oxford University Press</publisher-name> (<year>2010</year>).</citation></ref>
<ref id="B2"><label>2.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poplin</surname><given-names>R</given-names></name><name><surname>Varadarajan</surname><given-names>AV</given-names></name><name><surname>Blumer</surname><given-names>K</given-names></name><name><surname>Liu</surname><given-names>Y</given-names></name><name><surname>McConnell</surname><given-names>MV</given-names></name><name><surname>Corrado</surname><given-names>GS</given-names></name><etal/></person-group> <article-title>Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning</article-title>. <source>Nat Biomed Eng</source>. (<year>2018</year>) <volume>2</volume>(<issue>3</issue>):<fpage>158</fpage>&#x2013;<lpage>64</lpage>. <pub-id pub-id-type="doi">10.1038/s41551-018-0195-0</pub-id><pub-id pub-id-type="pmid">31015713</pub-id></citation></ref>
<ref id="B3"><label>3.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khanagar</surname><given-names>SB</given-names></name><name><surname>Al-ehaideb</surname><given-names>A</given-names></name><name><surname>Maganur</surname><given-names>PC</given-names></name><name><surname>Vishwanathaiah</surname><given-names>S</given-names></name><name><surname>Patil</surname><given-names>S</given-names></name><name><surname>Baeshen</surname><given-names>HA</given-names></name><etal/></person-group> <article-title>Developments, application, and performance of artificial intelligence in dentistry&#x2014;a systematic review</article-title>. <source>J Dent Sci</source>. (<year>2021</year>) <volume>16</volume>(<issue>1</issue>):<fpage>508</fpage>&#x2013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1016/j.jds.2020.06.019</pub-id><pub-id pub-id-type="pmid">33384840</pub-id></citation></ref>
<ref id="B4"><label>4.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khanagar</surname><given-names>SB</given-names></name><name><surname>Al-Ehaideb</surname><given-names>A</given-names></name><name><surname>Vishwanathaiah</surname><given-names>S</given-names></name><name><surname>Maganur</surname><given-names>PC</given-names></name><name><surname>Patil</surname><given-names>S</given-names></name><name><surname>Naik</surname><given-names>S</given-names></name><etal/></person-group> <article-title>Scope and performance of artificial intelligence technology in orthodontic diagnosis, treatment planning, and clinical decision-making&#x2014;a systematic review</article-title>. <source>J Dent Sci</source>. (<year>2021</year>) <volume>16</volume>(<issue>1</issue>):<fpage>482</fpage>&#x2013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1016/j.jds.2020.05.022</pub-id><pub-id pub-id-type="pmid">33384838</pub-id></citation></ref>
<ref id="B5"><label>5.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mahmood</surname><given-names>H</given-names></name><name><surname>Shaban</surname><given-names>M</given-names></name><name><surname>Indave</surname><given-names>BI</given-names></name><name><surname>Santos-Silva</surname><given-names>AR</given-names></name><name><surname>Rajpoot</surname><given-names>N</given-names></name><name><surname>Khurram</surname><given-names>SA</given-names></name></person-group>. <article-title>Use of artificial intelligence in diagnosis of head and neck precancerous and cancerous lesions: a systematic review</article-title>. <source>Oral Oncol</source>. (<year>2020</year>) <volume>110</volume>:<fpage>104885</fpage>. <pub-id pub-id-type="doi">10.1016/j.oraloncology.2020.104885</pub-id><pub-id pub-id-type="pmid">32674040</pub-id></citation></ref>
<ref id="B6"><label>6.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farook</surname><given-names>TH</given-names></name><name><surname>Jamayet</surname><given-names>NB</given-names></name><name><surname>Abdullah</surname><given-names>JY</given-names></name><name><surname>Alam</surname><given-names>MK</given-names></name></person-group>. <article-title>Machine learning and intelligent diagnostics in dental and orofacial pain management: a systematic review</article-title>. <source>Pain Res Manag</source>. (<year>2021</year>) <volume>2021</volume>:<fpage>6659133</fpage>. <pub-id pub-id-type="doi">10.1155/2021/6659133</pub-id><pub-id pub-id-type="pmid">33986900</pub-id></citation></ref>
<ref id="B7"><label>7.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>AbuSalim</surname><given-names>S</given-names></name><name><surname>Zakaria</surname><given-names>N</given-names></name><name><surname>Islam</surname><given-names>MR</given-names></name><name><surname>Kumar</surname><given-names>G</given-names></name><name><surname>Mokhtar</surname><given-names>N</given-names></name><name><surname>Abdulkadir</surname><given-names>SJ</given-names></name></person-group>. <article-title>Analysis of deep learning techniques for dental informatics: a systematic literature review</article-title>. <source>Healthcare</source>. (<year>2022</year>) <volume>10</volume>(<issue>10</issue>):<fpage>1892</fpage>. <pub-id pub-id-type="doi">10.3390/healthcare10101892</pub-id><pub-id pub-id-type="pmid">36292339</pub-id></citation></ref>
<ref id="B8"><label>8.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohammad-Rahimi</surname><given-names>H</given-names></name><name><surname>Motamedian</surname><given-names>SR</given-names></name><name><surname>Pirayesh</surname><given-names>Z</given-names></name><name><surname>Haiat</surname><given-names>A</given-names></name><name><surname>Zahedrozegar</surname><given-names>S</given-names></name><name><surname>Mahmoudinia</surname><given-names>E</given-names></name><etal/></person-group> <article-title>Deep learning in periodontology and oral implantology: a scoping review</article-title>. <source>J Periodont Res</source>. (<year>2022</year>) <volume>57</volume>(<issue>5</issue>):<fpage>942</fpage>&#x2013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1111/jre.13037</pub-id></citation></ref>
<ref id="B9"><label>9.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Turing</surname><given-names>AM</given-names></name><name><surname>Haugeland</surname><given-names>J</given-names></name></person-group>. <source>Computing machinery and intelligence</source>. <publisher-loc>MA</publisher-loc>: <publisher-name>MIT Press Cambridge</publisher-name> (<year>1950</year>).</citation></ref>
<ref id="B10"><label>10.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCarthy</surname><given-names>J</given-names></name><name><surname>Minsky</surname><given-names>M</given-names></name><name><surname>Rochester</surname><given-names>N</given-names></name><name><surname>Shannon</surname><given-names>CE</given-names></name></person-group>. <article-title>A proposal for the dartmouth summer research project on artificial intelligence</article-title>. <source>AI Magazine</source>. (<year>2006</year>) <volume>27</volume>(<issue>4</issue>):<fpage>12</fpage>&#x2013;<lpage>14</lpage>. <ext-link ext-link-type="uri" xlink:href="http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf">http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf</ext-link></citation></ref>
<ref id="B11"><label>11.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tatnall</surname><given-names>A</given-names></name></person-group>. <article-title>History of computers: hardware and software development</article-title>. In: <source>Encyclopedia of Life Support Systems (EOLSS), Developed under the Auspices of the UNESCO</source>. Paris, France: Eolss. (<year>2012</year>). <ext-link ext-link-type="uri" xlink:href="https://www.eolss.net">https://www.eolss.net</ext-link></citation></ref>
<ref id="B12"><label>12.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weizenbaum</surname><given-names>J</given-names></name></person-group>. <article-title>ELIZA&#x2014;a computer program for the study of natural language communication between man and machine</article-title>. <source>Commun ACM</source>. (<year>1966</year>) <volume>9</volume>(<issue>1</issue>):<fpage>36</fpage>&#x2013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.1145/365153.365168</pub-id></citation></ref>
<ref id="B13"><label>13.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hendler</surname><given-names>J</given-names></name></person-group>. <article-title>Avoiding another AI winter</article-title>. <source>IEEE Intell Syst</source>. (<year>2008</year>) <volume>23</volume>(<issue>02</issue>):<fpage>2</fpage>&#x2013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1109/MIS.2008.20</pub-id></citation></ref>
<ref id="B14"><label>14.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schmidhuber</surname><given-names>J</given-names></name></person-group>. <article-title>Deep learning</article-title>. <source>Scholarpedia</source>. (<year>2015</year>) <volume>10</volume>(<issue>11</issue>):<fpage>32832</fpage>. <pub-id pub-id-type="doi">10.4249/scholarpedia.32832</pub-id></citation></ref>
<ref id="B15"><label>15.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liebowitz</surname><given-names>J</given-names></name></person-group>. <article-title>Expert systems: a short introduction</article-title>. <source>Eng Fract Mech</source>. (<year>1995</year>) <volume>50</volume>(<issue>5&#x2013;6</issue>):<fpage>601</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1016/0013-7944(94)E0047-K</pub-id></citation></ref>
<ref id="B16"><label>16.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>McDermott</surname><given-names>JP</given-names></name></person-group>. <conf-name>RI: an expert in the computer systems domain</conf-name>. <conf-name>AAAI Conference on artificial intelligence</conf-name> (<year>1980</year>).</citation></ref>
<ref id="B17"><label>17.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krizhevsky</surname><given-names>A</given-names></name><name><surname>Sutskever</surname><given-names>I</given-names></name><name><surname>Hinton</surname><given-names>GE</given-names></name></person-group>. <article-title>Imagenet classification with deep convolutional neural networks</article-title>. <source>Commun ACM</source>. (<year>2017</year>) <volume>60</volume>(<issue>6</issue>):<fpage>84</fpage>&#x2013;<lpage>90</lpage>. <pub-id pub-id-type="doi">10.1145/3065386</pub-id></citation></ref>
<ref id="B18"><label>18.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Russakovsky</surname><given-names>O</given-names></name><name><surname>Deng</surname><given-names>J</given-names></name><name><surname>Su</surname><given-names>H</given-names></name><name><surname>Krause</surname><given-names>J</given-names></name><name><surname>Satheesh</surname><given-names>S</given-names></name><name><surname>Ma</surname><given-names>S</given-names></name><etal/></person-group> <article-title>Imagenet large scale visual recognition challenge</article-title>. <source>Int J Comput Vis</source>. (<year>2015</year>) <volume>115</volume>(<issue>3</issue>):<fpage>211</fpage>&#x2013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1007/s11263-015-0816-y</pub-id></citation></ref>
<ref id="B19"><label>19.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Campbell</surname><given-names>M</given-names></name><name><surname>Hoane Jr</surname><given-names>AJ</given-names></name><name><surname>Hsu</surname><given-names>F-h</given-names></name></person-group>. <article-title>Deep blue</article-title>. <source>Artif Intell</source>. (<year>2002</year>) <volume>134</volume>(<issue>1&#x2013;2</issue>):<fpage>57</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1016/S0004-3702(01)00129-1</pub-id></citation></ref>
<ref id="B20"><label>20.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chao</surname><given-names>X</given-names></name><name><surname>Kou</surname><given-names>G</given-names></name><name><surname>Li</surname><given-names>T</given-names></name><name><surname>Peng</surname><given-names>Y</given-names></name></person-group>. <article-title>Jie ke versus AlphaGo: a ranking approach using decision making method for large-scale data with incomplete information</article-title>. <source>Eur J Oper Res</source>. (<year>2018</year>) <volume>265</volume>(<issue>1</issue>):<fpage>239</fpage>&#x2013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1016/j.ejor.2017.07.030</pub-id></citation></ref>
<ref id="B21"><label>21.</label><citation citation-type="journal"><collab>OpenAI. ChatGPT</collab>. <article-title>Optimizing language models for dialogue</article-title>. Available at: <ext-link ext-link-type="uri" xlink:href="https://openai.com/blog/chatgpt/">https://openai.com/blog/chatgpt/</ext-link> (accessed on 7 February 2023).</citation></ref>
<ref id="B22"><label>22.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fang</surname><given-names>G</given-names></name><name><surname>Chow</surname><given-names>MC</given-names></name><name><surname>Ho</surname><given-names>JD</given-names></name><name><surname>He</surname><given-names>Z</given-names></name><name><surname>Wang</surname><given-names>K</given-names></name><name><surname>Ng</surname><given-names>T</given-names></name><etal/></person-group> <article-title>Soft robotic manipulator for intraoperative MRI-guided transoral laser microsurgery</article-title>. <source>Sci Robot</source>. (<year>2021</year>) <volume>6</volume>(<issue>57</issue>):<fpage>eabg5575</fpage>. <pub-id pub-id-type="doi">10.1126/scirobotics.abg5575</pub-id><pub-id pub-id-type="pmid">34408096</pub-id></citation></ref>
<ref id="B23"><label>23.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Flowers</surname><given-names>JC</given-names></name></person-group>. <conf-name>Strong and weak AI: deweyan considerations</conf-name>. <conf-name>AAAI Spring symposium: towards conscious AI systems</conf-name> (<year>2019</year>).</citation></ref>
<ref id="B24"><label>24.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Hastie</surname><given-names>T</given-names></name><name><surname>Tibshirani</surname><given-names>R</given-names></name><name><surname>Friedman</surname><given-names>J</given-names></name></person-group>. <article-title>Overview of supervised learning</article-title>. In: Hastie T, Tibshirani R, Friedman J, editors. <source>The elements of statistical learning</source>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2009</year>). p. <fpage>9</fpage>&#x2013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1007/978-0-387-84858-7_2</pub-id></citation></ref>
<ref id="B25"><label>25.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Ray</surname><given-names>S</given-names></name></person-group>. <conf-name>A quick review of machine learning algorithms</conf-name>. <conf-name>International conference on machine learning, big data, cloud and parallel computing (COMITCon)</conf-name>; <conf-date>2019 14&#x2013;16 Feb</conf-date> (<year>2019</year>).</citation></ref>
<ref id="B26"><label>26.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Hastie</surname><given-names>T</given-names></name><name><surname>Tibshirani</surname><given-names>R</given-names></name><name><surname>Friedman</surname><given-names>J</given-names></name></person-group>. <article-title>Unsupervised learning</article-title>. In: <person-group person-group-type="author"><name><surname>Hastie</surname><given-names>T</given-names></name><name><surname>Tibshirani</surname><given-names>R</given-names></name><name><surname>Friedman</surname><given-names>J</given-names></name></person-group>, editors <source>The elements of statistical learning</source>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>Springer</publisher-name> (<year>2009</year>). p. <fpage>485</fpage>&#x2013;<lpage>585</lpage>. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1007/978-0-387-84858-7_14">https://doi.org/10.1007/978-0-387-84858-7_14</ext-link></citation></ref>
<ref id="B27"><label>27.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname><given-names>X</given-names></name><name><surname>Goldberg</surname><given-names>AB</given-names></name></person-group>. <article-title>Introduction to semi-supervised learning</article-title>. <source>Synth Lect Artif Intell Mach Learn</source>. (<year>2009</year>) <volume>3</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>130</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-031-01548-9</pub-id></citation></ref>
<ref id="B28"><label>28.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhou</surname><given-names>Z-H</given-names></name></person-group>. <article-title>A brief introduction to weakly supervised learning</article-title>. <source>Natl Sci Rev</source>. (<year>2017</year>) <volume>5</volume>(<issue>1</issue>):<fpage>44</fpage>&#x2013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1093/nsr/nwx106</pub-id></citation></ref>
<ref id="B29"><label>29.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Agatonovic-Kustrin</surname><given-names>S</given-names></name><name><surname>Beresford</surname><given-names>R</given-names></name></person-group>. <article-title>Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research</article-title>. <source>J Pharm Biomed Anal</source>. (<year>2000</year>) <volume>22</volume>(<issue>5</issue>):<fpage>717</fpage>&#x2013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1016/S0731-7085(99)00272-1</pub-id><pub-id pub-id-type="pmid">10815714</pub-id></citation></ref>
<ref id="B30"><label>30.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeCun</surname><given-names>Y</given-names></name><name><surname>Bengio</surname><given-names>Y</given-names></name><name><surname>Hinton</surname><given-names>G</given-names></name></person-group>. <article-title>Deep learning</article-title>. <source>Nature</source>. (<year>2015</year>) <volume>521</volume>(<issue>7553</issue>):<fpage>436</fpage>&#x2013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1038/nature14539</pub-id><pub-id pub-id-type="pmid">26017442</pub-id></citation></ref>
<ref id="B31"><label>31.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Nam</surname><given-names>CS</given-names></name></person-group>. <source>Neuroergonomics: Principles and practice</source>. <publisher-loc>Gewerbestrasse, Switzerland</publisher-loc>: <publisher-name>Springer Nature</publisher-name> (<year>2020</year>). <pub-id pub-id-type="doi">10.1007/978-3-030-34784-0</pub-id></citation></ref>
<ref id="B32"><label>32.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goodfellow</surname><given-names>I</given-names></name><name><surname>Pouget-Abadie</surname><given-names>J</given-names></name><name><surname>Mirza</surname><given-names>M</given-names></name><name><surname>Xu</surname><given-names>B</given-names></name><name><surname>Warde-Farley</surname><given-names>D</given-names></name><name><surname>Ozair</surname><given-names>S</given-names></name><etal/></person-group> <article-title>Generative adversarial nets</article-title>. <source>Adv Neural Inf Process Syst</source>. (<year>2014</year>) <volume>27</volume>. <fpage>2672</fpage>&#x2013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.5555/2969033.2969125</pub-id></citation></ref>
<ref id="B33"><label>33.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Gui</surname><given-names>J</given-names></name><name><surname>Sun</surname><given-names>Z</given-names></name><name><surname>Wen</surname><given-names>Y</given-names></name><name><surname>Tao</surname><given-names>D</given-names></name><name><surname>Ye</surname><given-names>J</given-names></name></person-group>. <comment>A review on generative adversarial networks: algorithms, theory, and applications. arXiv preprint arXiv:200106937</comment> (<year>2020</year>).</citation></ref>
<ref id="B34"><label>34.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aggarwal</surname><given-names>A</given-names></name><name><surname>Mittal</surname><given-names>M</given-names></name><name><surname>Battineni</surname><given-names>G</given-names></name></person-group>. <article-title>Generative adversarial network: an overview of theory and applications</article-title>. <source>Int J Inf Manage Data Insights</source>. (<year>2021</year>): <volume>33</volume>(1):<fpage>100004</fpage>. <pub-id pub-id-type="doi">10.1016/j.jjimei.2020.100004</pub-id></citation></ref>
<ref id="B35"><label>35.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Wu</surname><given-names>J</given-names></name><name><surname>Zhang</surname><given-names>C</given-names></name><name><surname>Xue</surname><given-names>T</given-names></name><name><surname>Freeman</surname><given-names>WT</given-names></name><name><surname>Tenenbaum</surname><given-names>JB</given-names></name></person-group>. <conf-name>Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling</conf-name>. <conf-name>Proceedings of the 30th international conference on neural information processing systems</conf-name> (<year>2016</year>).</citation></ref>
<ref id="B36"><label>36.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schleyer</surname><given-names>TK</given-names></name><name><surname>Thyvalikakath</surname><given-names>TP</given-names></name><name><surname>Spallek</surname><given-names>H</given-names></name><name><surname>Torres-Urquidy</surname><given-names>MH</given-names></name><name><surname>Hernandez</surname><given-names>P</given-names></name><name><surname>Yuhaniak</surname><given-names>J</given-names></name></person-group>. <article-title>Clinical computing in general dentistry</article-title>. <source>J Am Med Inform Assoc</source>. (<year>2006</year>) <volume>13</volume>(<issue>3</issue>):<fpage>344</fpage>&#x2013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1197/jamia.M1990</pub-id><pub-id pub-id-type="pmid">16501177</pub-id></citation></ref>
<ref id="B37"><label>37.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chae</surname><given-names>YM</given-names></name><name><surname>Yoo</surname><given-names>KB</given-names></name><name><surname>Kim</surname><given-names>ES</given-names></name><name><surname>Chae</surname><given-names>H</given-names></name></person-group>. <article-title>The adoption of electronic medical records and decision support systems in Korea</article-title>. <source>Healthc Inform Res</source>. (<year>2011</year>) <volume>17</volume>(<issue>3</issue>):<fpage>172</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.4258/hir.2011.17.3.172</pub-id><pub-id pub-id-type="pmid">22084812</pub-id></citation></ref>
<ref id="B38"><label>38.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Norgeot</surname><given-names>B</given-names></name><name><surname>Quer</surname><given-names>G</given-names></name><name><surname>Beaulieu-Jones</surname><given-names>BK</given-names></name><name><surname>Torkamani</surname><given-names>A</given-names></name><name><surname>Dias</surname><given-names>R</given-names></name><name><surname>Gianfrancesco</surname><given-names>M</given-names></name><etal/></person-group> <article-title>Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist</article-title>. <source>Nat Med</source>. (<year>2020</year>) <volume>26</volume>(<issue>9</issue>):<fpage>1320</fpage>&#x2013;<lpage>4</lpage>. <pub-id pub-id-type="doi">10.1038/s41591-020-1041-y</pub-id><pub-id pub-id-type="pmid">32908275</pub-id></citation></ref>
<ref id="B39"><label>39.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Huang</surname><given-names>Y-P</given-names></name><name><surname>Lee</surname><given-names>S-Y</given-names></name></person-group>. <comment>An effective and reliable methodology for deep machine learning application in caries detection. medRxiv</comment> (<year>2021</year>).</citation></ref>
<ref id="B40"><label>40.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fukuda</surname><given-names>M</given-names></name><name><surname>Inamoto</surname><given-names>K</given-names></name><name><surname>Shibata</surname><given-names>N</given-names></name><name><surname>Ariji</surname><given-names>Y</given-names></name><name><surname>Yanashita</surname><given-names>Y</given-names></name><name><surname>Kutsuna</surname><given-names>S</given-names></name><etal/></person-group> <article-title>Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography</article-title>. <source>Oral Radiol</source>. (<year>2020</year>) <volume>36</volume>(<issue>4</issue>):<fpage>337</fpage>&#x2013;<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1007/s11282-019-00409-x</pub-id><pub-id pub-id-type="pmid">31535278</pub-id></citation></ref>
<ref id="B41"><label>41.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Vadlamani</surname><given-names>R</given-names></name></person-group>. <source>Application of machine learning technologies for detection of proximal lesions in intraoral digital images: in vitro study</source>. <publisher-loc>Louisville, Kentucky, USA</publisher-loc>: <publisher-name>University of Louisville</publisher-name> (<year>2020</year>). <pub-id pub-id-type="doi">10.18297/etd/3519</pub-id></citation></ref>
<ref id="B42"><label>42.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Setzer</surname><given-names>FC</given-names></name><name><surname>Shi</surname><given-names>KJ</given-names></name><name><surname>Zhang</surname><given-names>Z</given-names></name><name><surname>Yan</surname><given-names>H</given-names></name><name><surname>Yoon</surname><given-names>H</given-names></name><name><surname>Mupparapu</surname><given-names>M</given-names></name><etal/></person-group> <article-title>Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images</article-title>. <source>J Endod</source>. (<year>2020</year>) <volume>46</volume>(<issue>7</issue>):<fpage>987</fpage>&#x2013;<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1016/j.joen.2020.03.025</pub-id><pub-id pub-id-type="pmid">32402466</pub-id></citation></ref>
<ref id="B43"><label>43.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Jaiswal</surname><given-names>P</given-names></name><name><surname>Bhirud</surname><given-names>S</given-names></name></person-group>. <conf-name>Study and analysis of an approach towards the classification of tooth wear in dentistry using machine learning technique</conf-name>. <conf-name>IEEE International conference on technology, research, and innovation for betterment of society (TRIBES)</conf-name> (<year>2021</year>). <publisher-name>IEEE</publisher-name>.</citation></ref>
<ref id="B44"><label>44.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shetty</surname><given-names>H</given-names></name><name><surname>Shetty</surname><given-names>S</given-names></name><name><surname>Kakade</surname><given-names>A</given-names></name><name><surname>Shetty</surname><given-names>A</given-names></name><name><surname>Karobari</surname><given-names>MI</given-names></name><name><surname>Pawar</surname><given-names>AM</given-names></name><etal/></person-group> <article-title>Three-dimensional semi-automated volumetric assessment of the pulp space of teeth following regenerative dental procedures</article-title>. <source>Sci Rep</source>. (<year>2021</year>) <volume>11</volume>(<issue>1</issue>):<fpage>21914</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-021-01489-8</pub-id><pub-id pub-id-type="pmid">34754049</pub-id></citation></ref>
<ref id="B45"><label>45.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname><given-names>J-H</given-names></name><name><surname>Kim</surname><given-names>D-H</given-names></name><name><surname>Jeong</surname><given-names>S-N</given-names></name><name><surname>Choi</surname><given-names>S-H</given-names></name></person-group>. <article-title>Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm</article-title>. <source>J Dent</source>. (<year>2018</year>) <volume>77</volume>:<fpage>106</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1016/j.jdent.2018.07.015</pub-id><pub-id pub-id-type="pmid">30056118</pub-id></citation></ref>
<ref id="B46"><label>46.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x00FC;hnisch</surname><given-names>J</given-names></name><name><surname>Meyer</surname><given-names>O</given-names></name><name><surname>Hesenius</surname><given-names>M</given-names></name><name><surname>Hickel</surname><given-names>R</given-names></name><name><surname>Gruhn</surname><given-names>V</given-names></name></person-group>. <article-title>Caries detection on intraoral images using artificial intelligence</article-title>. <source>J Dent Res</source>. (<year>2021</year>) <volume>101</volume>(<issue>2</issue>). <pub-id pub-id-type="doi">10.1177/00220345211032524</pub-id></citation></ref>
<ref id="B47"><label>47.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schwendicke</surname><given-names>F</given-names></name><name><surname>Rossi</surname><given-names>J</given-names></name><name><surname>G&#x00F6;stemeyer</surname><given-names>G</given-names></name><name><surname>Elhennawy</surname><given-names>K</given-names></name><name><surname>Cantu</surname><given-names>A</given-names></name><name><surname>Gaudin</surname><given-names>R</given-names></name><etal/></person-group> <article-title>Cost-effectiveness of artificial intelligence for proximal caries detection</article-title>. <source>J Dent Res</source>. (<year>2021</year>) <volume>100</volume>(<issue>4</issue>):<fpage>369</fpage>&#x2013;<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1177/0022034520972335</pub-id><pub-id pub-id-type="pmid">33198554</pub-id></citation></ref>
<ref id="B48"><label>48.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>Y-W</given-names></name><name><surname>Stanley</surname><given-names>K</given-names></name><name><surname>Att</surname><given-names>W</given-names></name></person-group>. <article-title>Artificial intelligence in dentistry: current applications and future perspectives</article-title>. <source>Quintessence Int</source>. (<year>2020</year>) <volume>51</volume>(<issue>3</issue>):<fpage>248</fpage>&#x2013;<lpage>57</lpage>. <pub-id pub-id-type="doi">10.3290/j.qi.a43952</pub-id><pub-id pub-id-type="pmid">32020135</pub-id></citation></ref>
<ref id="B49"><label>49.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tonetti</surname><given-names>MS</given-names></name><name><surname>Jepsen</surname><given-names>S</given-names></name><name><surname>Jin</surname><given-names>L</given-names></name><name><surname>Otomo-Corgel</surname><given-names>J</given-names></name></person-group>. <article-title>Impact of the global burden of periodontal diseases on health, nutrition and wellbeing of mankind: a call for global action</article-title>. <source>J Clin Periodontol</source>. (<year>2017</year>) <volume>44</volume>(<issue>5</issue>):<fpage>456</fpage>&#x2013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1111/jcpe.12732</pub-id><pub-id pub-id-type="pmid">28419559</pub-id></citation></ref>
<ref id="B50"><label>50.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krois</surname><given-names>J</given-names></name><name><surname>Ekert</surname><given-names>T</given-names></name><name><surname>Meinhold</surname><given-names>L</given-names></name><name><surname>Golla</surname><given-names>T</given-names></name><name><surname>Kharbot</surname><given-names>B</given-names></name><name><surname>Wittemeier</surname><given-names>A</given-names></name><etal/></person-group> <article-title>Deep learning for the radiographic detection of periodontal bone loss</article-title>. <source>Sci Rep</source>. (<year>2019</year>) <volume>9</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-019-44839-3</pub-id><pub-id pub-id-type="pmid">30626917</pub-id></citation></ref>
<ref id="B51"><label>51.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname><given-names>E-H</given-names></name><name><surname>Kim</surname><given-names>S</given-names></name><name><surname>Kim</surname><given-names>H-J</given-names></name><name><surname>Jeong</surname><given-names>H-O</given-names></name><name><surname>Lee</surname><given-names>J</given-names></name><name><surname>Jang</surname><given-names>J</given-names></name><etal/></person-group> <article-title>Prediction of chronic periodontitis severity using machine learning models based on salivary bacterial copy number</article-title>. <source>Front Cell Infect Microbiol</source>. (<year>2020</year>) <volume>10</volume>:<fpage>698</fpage>. <pub-id pub-id-type="doi">10.3389/fcimb.2020.571515</pub-id></citation></ref>
<ref id="B52"><label>52.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname><given-names>W</given-names></name><name><surname>Wu</surname><given-names>J</given-names></name><name><surname>Mao</surname><given-names>Y</given-names></name><name><surname>Zhu</surname><given-names>S</given-names></name><name><surname>Huang</surname><given-names>GF</given-names></name><name><surname>Petritis</surname><given-names>B</given-names></name><etal/></person-group> <article-title>Developing a periodontal disease antibody array for the prediction of severe periodontal disease using machine learning classifiers</article-title>. <source>J Periodontol</source>. (<year>2020</year>) <volume>91</volume>(<issue>2</issue>):<fpage>232</fpage>&#x2013;<lpage>43</lpage>. <pub-id pub-id-type="doi">10.1002/JPER.19-0173</pub-id><pub-id pub-id-type="pmid">31397883</pub-id></citation></ref>
<ref id="B53"><label>53.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname><given-names>J-H</given-names></name><name><surname>Kim</surname><given-names>D-H</given-names></name><name><surname>Jeong</surname><given-names>S-N</given-names></name><name><surname>Choi</surname><given-names>S-H</given-names></name></person-group>. <article-title>Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm</article-title>. <source>J Periodontal Implant Sci</source>. (<year>2018</year>) <volume>48</volume>(<issue>2</issue>):<fpage>114</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.5051/jpis.2018.48.2.114</pub-id><pub-id pub-id-type="pmid">29770240</pub-id></citation></ref>
<ref id="B54"><label>54.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Yauney</surname><given-names>G</given-names></name><name><surname>Rana</surname><given-names>A</given-names></name><name><surname>Wong</surname><given-names>LC</given-names></name><name><surname>Javia</surname><given-names>P</given-names></name><name><surname>Muftu</surname><given-names>A</given-names></name><name><surname>Shah</surname><given-names>P</given-names></name></person-group>. <conf-name>Automated process incorporating machine learning segmentation and correlation of oral diseases with systemic health</conf-name>. <conf-name>41st Annual international conference of the IEEE engineering in medicine and biology society (EMBC)</conf-name> (<year>2019</year>). <publisher-name>IEEE</publisher-name>.</citation></ref>
<ref id="B55"><label>55.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Proffit</surname><given-names>WR</given-names></name></person-group>. <article-title>The evolution of orthodontics to a data-based specialty</article-title>. <source>Am J Orthod Dentofacial Orthop</source>. (<year>2000</year>) <volume>117</volume>(<issue>5</issue>):<fpage>545</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1016/S0889-5406(00)70194-6</pub-id><pub-id pub-id-type="pmid">10799109</pub-id></citation></ref>
<ref id="B56"><label>56.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tanikawa</surname><given-names>C</given-names></name><name><surname>Yamashiro</surname><given-names>T</given-names></name></person-group>. <article-title>Development of novel artificial intelligence systems to predict facial morphology after orthognathic surgery and orthodontic treatment in Japanese patients</article-title>. <source>Sci Rep</source>. (<year>2021</year>) <volume>11</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-79139-8</pub-id><pub-id pub-id-type="pmid">33414495</pub-id></citation></ref>
<ref id="B57"><label>57.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thanathornwong</surname><given-names>B</given-names></name></person-group>. <article-title>Bayesian-based decision support system for assessing the needs for orthodontic treatment</article-title>. <source>Healthc Inform Res</source>. (<year>2018</year>) <volume>24</volume>(<issue>1</issue>):<fpage>22</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.4258/hir.2018.24.1.22</pub-id><pub-id pub-id-type="pmid">29503749</pub-id></citation></ref>
<ref id="B58"><label>58.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xie</surname><given-names>X</given-names></name><name><surname>Wang</surname><given-names>L</given-names></name><name><surname>Wang</surname><given-names>A</given-names></name></person-group>. <article-title>Artificial neural network modeling for deciding if extractions are necessary prior to orthodontic treatment</article-title>. <source>Angle Orthod</source>. (<year>2010</year>) <volume>80</volume>(<issue>2</issue>):<fpage>262</fpage>&#x2013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.2319/111608-588.1</pub-id><pub-id pub-id-type="pmid">19905850</pub-id></citation></ref>
<ref id="B59"><label>59.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jung</surname><given-names>S-K</given-names></name><name><surname>Kim</surname><given-names>T-W</given-names></name></person-group>. <article-title>New approach for the diagnosis of extractions with neural network machine learning</article-title>. <source>Am J Orthod Dentofacial Orthop</source>. (<year>2016</year>) <volume>149</volume>(<issue>1</issue>):<fpage>127</fpage>&#x2013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1016/j.ajodo.2015.07.030</pub-id><pub-id pub-id-type="pmid">26718386</pub-id></citation></ref>
<ref id="B60"><label>60.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Park</surname><given-names>J-H</given-names></name><name><surname>Hwang</surname><given-names>H-W</given-names></name><name><surname>Moon</surname><given-names>J-H</given-names></name><name><surname>Yu</surname><given-names>Y</given-names></name><name><surname>Kim</surname><given-names>H</given-names></name><name><surname>Her</surname><given-names>S-B</given-names></name><etal/></person-group> <article-title>Automated identification of cephalometric landmarks: part 1&#x2014;comparisons between the latest deep-learning methods YOLOV3 and SSD</article-title>. <source>Angle Orthod</source>. (<year>2019</year>) <volume>89</volume>(<issue>6</issue>):<fpage>903</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.2319/022019-127.1</pub-id><pub-id pub-id-type="pmid">31282738</pub-id></citation></ref>
<ref id="B61"><label>61.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hwang</surname><given-names>H-W</given-names></name><name><surname>Park</surname><given-names>J-H</given-names></name><name><surname>Moon</surname><given-names>J-H</given-names></name><name><surname>Yu</surname><given-names>Y</given-names></name><name><surname>Kim</surname><given-names>H</given-names></name><name><surname>Her</surname><given-names>S-B</given-names></name><etal/></person-group> <article-title>Automated identification of cephalometric landmarks: part 2-might it be better than human?</article-title> <source>Angle Orthod</source>. (<year>2020</year>) <volume>90</volume>(<issue>1</issue>):<fpage>69</fpage>&#x2013;<lpage>76</lpage>. <pub-id pub-id-type="doi">10.2319/022019-129.1</pub-id><pub-id pub-id-type="pmid">31335162</pub-id></citation></ref>
<ref id="B62"><label>62.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wei</surname><given-names>G</given-names></name><name><surname>Cui</surname><given-names>Z</given-names></name><name><surname>Zhu</surname><given-names>J</given-names></name><name><surname>Yang</surname><given-names>L</given-names></name><name><surname>Zhou</surname><given-names>Y</given-names></name><name><surname>Singh</surname><given-names>P</given-names></name><etal/></person-group> <article-title>Dense representative tooth landmark/axis detection network on 3D model</article-title>. <source>Comput Aided Geom Des</source>. (<year>2022</year>) <volume>94</volume>:<fpage>102077</fpage>. <pub-id pub-id-type="doi">10.1016/j.cagd.2022.102077</pub-id></citation></ref>
<ref id="B63"><label>63.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname><given-names>H</given-names></name><name><surname>Cho</surname><given-names>S</given-names></name><name><surname>Kim</surname><given-names>M</given-names></name><name><surname>Kim</surname><given-names>W</given-names></name><name><surname>Kim</surname><given-names>J</given-names></name><name><surname>Choi</surname><given-names>J</given-names></name></person-group>. <article-title>Automated skeletal classification with lateral cephalometry based on artificial intelligence</article-title>. <source>J Dent Res</source>. (<year>2020</year>) <volume>99</volume>(<issue>3</issue>):<fpage>249</fpage>&#x2013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1177/0022034520901715</pub-id><pub-id pub-id-type="pmid">31977286</pub-id></citation></ref>
<ref id="B64"><label>64.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Choi</surname><given-names>H-I</given-names></name><name><surname>Jung</surname><given-names>S-K</given-names></name><name><surname>Baek</surname><given-names>S-H</given-names></name><name><surname>Lim</surname><given-names>WH</given-names></name><name><surname>Ahn</surname><given-names>S-J</given-names></name><name><surname>Yang</surname><given-names>I-H</given-names></name><etal/></person-group> <article-title>Artificial intelligent model with neural network machine learning for the diagnosis of orthognathic surgery</article-title>. <source>J Craniofac Surg</source>. (<year>2019</year>) <volume>30</volume>(<issue>7</issue>):<fpage>1986</fpage>&#x2013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1097/SCS.0000000000005650</pub-id><pub-id pub-id-type="pmid">31205280</pub-id></citation></ref>
<ref id="B65"><label>65.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cui</surname><given-names>Z</given-names></name><name><surname>Li</surname><given-names>C</given-names></name><name><surname>Chen</surname><given-names>N</given-names></name><name><surname>Wei</surname><given-names>G</given-names></name><name><surname>Chen</surname><given-names>R</given-names></name><name><surname>Zhou</surname><given-names>Y</given-names></name><etal/></person-group> <article-title>TSegnet: an efficient and accurate tooth segmentation network on 3D dental model</article-title>. <source>Med Image Anal</source>. (<year>2021</year>) <volume>69</volume>:<fpage>101949</fpage>. <pub-id pub-id-type="doi">10.1016/j.media.2020.101949</pub-id><pub-id pub-id-type="pmid">33387908</pub-id></citation></ref>
<ref id="B66"><label>66.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cui</surname><given-names>Z</given-names></name><name><surname>Fang</surname><given-names>Y</given-names></name><name><surname>Mei</surname><given-names>L</given-names></name><name><surname>Zhang</surname><given-names>B</given-names></name><name><surname>Yu</surname><given-names>B</given-names></name><name><surname>Liu</surname><given-names>J</given-names></name><etal/></person-group> <article-title>A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images</article-title>. <source>Nat Commun</source>. (<year>2022</year>) <volume>13</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1038/s41467-022-29637-2</pub-id><pub-id pub-id-type="pmid">34983933</pub-id></citation></ref>
<ref id="B67"><label>67.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Junaid</surname><given-names>N</given-names></name><name><surname>Khan</surname><given-names>N</given-names></name><name><surname>Ahmed</surname><given-names>N</given-names></name><name><surname>Abbasi</surname><given-names>MS</given-names></name><name><surname>Das</surname><given-names>G</given-names></name><name><surname>Maqsood</surname><given-names>A</given-names></name></person-group>. <article-title>Development, application, and performance of artificial intelligence in cephalometric landmark identification and diagnosis: a systematic review</article-title>. <source>Healthcare</source>. (<year>2022</year>) <volume>10</volume>(<issue>12</issue>):<fpage>2454</fpage>. <pub-id pub-id-type="doi">10.3390/healthcare10122454</pub-id><pub-id pub-id-type="pmid">36553978</pub-id></citation></ref>
<ref id="B68"><label>68.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bulatova</surname><given-names>G</given-names></name><name><surname>Kusnoto</surname><given-names>B</given-names></name><name><surname>Grace</surname><given-names>V</given-names></name><name><surname>Tsay</surname><given-names>TP</given-names></name><name><surname>Avenetti</surname><given-names>DM</given-names></name><name><surname>Sanchez</surname><given-names>FJC</given-names></name></person-group>. <article-title>Assessment of automatic cephalometric landmark identification using artificial intelligence</article-title>. <source>Orthod Craniofac Res</source>. (<year>2021</year>) <volume>24</volume>:<fpage>37</fpage>&#x2013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1111/ocr.12542</pub-id><pub-id pub-id-type="pmid">34842346</pub-id></citation></ref>
<ref id="B69"><label>69.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kunz</surname><given-names>F</given-names></name><name><surname>Stellzig-Eisenhauer</surname><given-names>A</given-names></name><name><surname>Zeman</surname><given-names>F</given-names></name><name><surname>Boldt</surname><given-names>J</given-names></name></person-group>. <article-title>Artificial intelligence in orthodontics: evaluation of a fully automated cephalometric analysis using a customized convolutional neural network</article-title>. <source>J Orofac Orthop</source>. (<year>2020</year>) <volume>81</volume>(<issue>1</issue>):<fpage>52</fpage>&#x2013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.1007/s00056-019-00203-8</pub-id><pub-id pub-id-type="pmid">31853586</pub-id></citation></ref>
<ref id="B70"><label>70.</label><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Cui</surname><given-names>Z</given-names></name><name><surname>Zhang</surname><given-names>B</given-names></name><name><surname>Lian</surname><given-names>C</given-names></name><name><surname>Li</surname><given-names>C</given-names></name><name><surname>Yang</surname><given-names>L</given-names></name><name><surname>Wang</surname><given-names>W</given-names></name><etal/></person-group> <conf-name>Hierarchical morphology-guided tooth instance segmentation from CBCT images</conf-name>. <conf-name>International conference on information processing in medical imaging</conf-name> (<year>2021</year>), <publisher-name>Springer</publisher-name>.</citation></ref>
<ref id="B71"><label>71.</label><citation citation-type="other"><collab>World Health Organization</collab>. <comment>Cancer prevention. Available from:</comment> <ext-link ext-link-type="uri" xlink:href="https://www.who.int/cancer/prevention/diagnosis-screening/oral-cancer/en/">https://www.who.int/cancer/prevention/diagnosis-screening/oral-cancer/en/</ext-link></citation></ref>
<ref id="B72"><label>72.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Choi</surname><given-names>E</given-names></name><name><surname>Lee</surname><given-names>S</given-names></name><name><surname>Jeong</surname><given-names>E</given-names></name><name><surname>Shin</surname><given-names>S</given-names></name><name><surname>Park</surname><given-names>H</given-names></name><name><surname>Youm</surname><given-names>S</given-names></name><etal/></person-group> <article-title>Artificial intelligence in positioning between mandibular third molar and inferior alveolar nerve on panoramic radiography</article-title>. <source>Sci Rep</source>. (<year>2022</year>) <volume>12</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-021-99269-x</pub-id><pub-id pub-id-type="pmid">34992227</pub-id></citation></ref>
<ref id="B73"><label>73.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aubreville</surname><given-names>M</given-names></name><name><surname>Knipfer</surname><given-names>C</given-names></name><name><surname>Oetter</surname><given-names>N</given-names></name><name><surname>Jaremenko</surname><given-names>C</given-names></name><name><surname>Rodner</surname><given-names>E</given-names></name><name><surname>Denzler</surname><given-names>J</given-names></name><etal/></person-group> <article-title>Automatic classification of cancerous tissue in laserendomicroscopy images of the oral cavity using deep learning</article-title>. <source>Sci Rep</source>. (<year>2017</year>) <volume>7</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-017-12320-8</pub-id><pub-id pub-id-type="pmid">28127051</pub-id></citation></ref>
<ref id="B74"><label>74.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Warin</surname><given-names>K</given-names></name><name><surname>Limprasert</surname><given-names>W</given-names></name><name><surname>Suebnukarn</surname><given-names>S</given-names></name><name><surname>Jinaporntham</surname><given-names>S</given-names></name><name><surname>Jantana</surname><given-names>P</given-names></name><name><surname>Vicharueang</surname><given-names>S</given-names></name></person-group>. <article-title>AI-based analysis of oral lesions using novel deep convolutional neural networks for early detection of oral cancer</article-title>. <source>PLoS One</source>. (<year>2022</year>) <volume>17</volume>(<issue>8</issue>):<fpage>e0273508</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0273508</pub-id><pub-id pub-id-type="pmid">36001628</pub-id></citation></ref>
<ref id="B75"><label>75.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>James</surname><given-names>BL</given-names></name><name><surname>Sunny</surname><given-names>SP</given-names></name><name><surname>Heidari</surname><given-names>AE</given-names></name><name><surname>Ramanjinappa</surname><given-names>RD</given-names></name><name><surname>Lam</surname><given-names>T</given-names></name><name><surname>Tran</surname><given-names>AV</given-names></name><etal/></person-group> <article-title>Validation of a point-of-care optical coherence tomography device with machine learning algorithm for detection of oral potentially malignant and malignant lesions</article-title>. <source>Cancers</source>. (<year>2021</year>) <volume>13</volume>(<issue>14</issue>):<fpage>3583</fpage>. <pub-id pub-id-type="doi">10.3390/cancers13143583</pub-id><pub-id pub-id-type="pmid">34298796</pub-id></citation></ref>
<ref id="B76"><label>76.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heidari</surname><given-names>AE</given-names></name><name><surname>Pham</surname><given-names>TT</given-names></name><name><surname>Ifegwu</surname><given-names>I</given-names></name><name><surname>Burwell</surname><given-names>R</given-names></name><name><surname>Armstrong</surname><given-names>WB</given-names></name><name><surname>Tjoson</surname><given-names>T</given-names></name><etal/></person-group> <article-title>The use of optical coherence tomography and convolutional neural networks to distinguish normal and abnormal oral mucosa</article-title>. <source>J Biophotonics</source>. (<year>2020</year>) <volume>13</volume>(<issue>3</issue>):<fpage>e201900221</fpage>. <pub-id pub-id-type="doi">10.1002/jbio.201900221</pub-id><pub-id pub-id-type="pmid">31710775</pub-id></citation></ref>
<ref id="B77"><label>77.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poedjiastoeti</surname><given-names>W</given-names></name><name><surname>Suebnukarn</surname><given-names>S</given-names></name></person-group>. <article-title>Application of convolutional neural network in the diagnosis of jaw tumors</article-title>. <source>Healthc Inform Res</source>. (<year>2018</year>) <volume>24</volume>(<issue>3</issue>):<fpage>236</fpage>&#x2013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.4258/hir.2018.24.3.236</pub-id><pub-id pub-id-type="pmid">30109156</pub-id></citation></ref>
<ref id="B78"><label>78.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Xu</surname><given-names>B</given-names></name><name><surname>Wang</surname><given-names>N</given-names></name><name><surname>Chen</surname><given-names>T</given-names></name><name><surname>Li</surname><given-names>M</given-names></name></person-group>. <comment>Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853</comment> (<year>2015</year>).</citation></ref>
<ref id="B79"><label>79.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dhillon</surname><given-names>H</given-names></name><name><surname>Chaudhari</surname><given-names>PK</given-names></name><name><surname>Dhingra</surname><given-names>K</given-names></name><name><surname>Kuo</surname><given-names>R-F</given-names></name><name><surname>Sokhi</surname><given-names>RK</given-names></name><name><surname>Alam</surname><given-names>MK</given-names></name><etal/></person-group> <article-title>Current applications of artificial intelligence in cleft care: a scoping review</article-title>. <source>Front Med</source>. (<year>2021</year>) <volume>8</volume>:<fpage>1</fpage>&#x2013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.3389/fmed.2021.676490</pub-id></citation></ref>
<ref id="B80"><label>80.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chang</surname><given-names>HY</given-names></name><name><surname>Jung</surname><given-names>CK</given-names></name><name><surname>Woo</surname><given-names>JI</given-names></name><name><surname>Lee</surname><given-names>S</given-names></name><name><surname>Cho</surname><given-names>J</given-names></name><name><surname>Kim</surname><given-names>SW</given-names></name><etal/></person-group> <article-title>Artificial intelligence in pathology</article-title>. <source>J Pathol Transl Med</source>. (<year>2019</year>) <volume>53</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.4132/jptm.2018.12.16</pub-id><pub-id pub-id-type="pmid">30599506</pub-id></citation></ref>
<ref id="B81"><label>81.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>Y</given-names></name><name><surname>Lee</surname><given-names>JKY</given-names></name><name><surname>Kwong</surname><given-names>G</given-names></name><name><surname>Pow</surname><given-names>EHN</given-names></name><name><surname>Tsoi</surname><given-names>JKH</given-names></name></person-group>. <article-title>Morphology and fracture behavior of lithium disilicate dental crowns designed by human and knowledge-based AI</article-title>. <source>J Mech Behav Biomed Mater</source>. (<year>2022</year>) <volume>131</volume>:<fpage>105256</fpage>. <pub-id pub-id-type="doi">10.1016/j.jmbbm.2022.105256</pub-id></citation></ref>
<ref id="B82"><label>82.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Hwang</surname><given-names>J-J</given-names></name><name><surname>Azernikov</surname><given-names>S</given-names></name><name><surname>Efros</surname><given-names>AA</given-names></name><name><surname>Yu</surname><given-names>SX</given-names></name></person-group>. <comment>Learning beyond human expertise with generative models for dental restorations. arXiv preprint arXiv:1804.00064</comment> (<year>2018</year>).</citation></ref>
<ref id="B83"><label>83.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tian</surname><given-names>S</given-names></name><name><surname>Wang</surname><given-names>M</given-names></name><name><surname>Dai</surname><given-names>N</given-names></name><name><surname>Ma</surname><given-names>H</given-names></name><name><surname>Li</surname><given-names>L</given-names></name><name><surname>Fiorenza</surname><given-names>L</given-names></name><etal/></person-group> <article-title>DCPR-GAN: dental crown prosthesis restoration using two-stage generative adversarial networks</article-title>. <source>IEEE J Biomed Health Inform</source>. (<year>2021</year>) <volume>26</volume>(<issue>1</issue>):<fpage>151</fpage>&#x2013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.1109/JBHI.2021.3119394</pub-id></citation></ref>
<ref id="B84"><label>84.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ding</surname><given-names>H</given-names></name><name><surname>Cui</surname><given-names>Z</given-names></name><name><surname>Maghami</surname><given-names>E</given-names></name><name><surname>Chen</surname><given-names>Y</given-names></name><name><surname>Matinlinna</surname><given-names>JP</given-names></name><name><surname>Pow</surname><given-names>EHN</given-names></name><etal/></person-group> <article-title>Morphology and mechanical performance of dental crown designed by 3D-DCGAN</article-title>. <source>Dent Mater</source>. (<year>2023</year>) <pub-id pub-id-type="doi">10.1016/j.dental.2023.02.001</pub-id></citation></ref>
<ref id="B85"><label>85.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wei</surname><given-names>J</given-names></name><name><surname>Peng</surname><given-names>M</given-names></name><name><surname>Li</surname><given-names>Q</given-names></name><name><surname>Wang</surname><given-names>Y</given-names></name></person-group>. <article-title>Evaluation of a novel computer color matching system based on the improved back-propagation neural network model</article-title>. <source>J Prosthodont</source>. (<year>2018</year>) <volume>27</volume>(<issue>8</issue>):<fpage>775</fpage>&#x2013;<lpage>83</lpage>. <pub-id pub-id-type="doi">10.1111/jopr.12561</pub-id><pub-id pub-id-type="pmid">27860023</pub-id></citation></ref>
<ref id="B86"><label>86.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamaguchi</surname><given-names>S</given-names></name><name><surname>Lee</surname><given-names>C</given-names></name><name><surname>Karaer</surname><given-names>O</given-names></name><name><surname>Ban</surname><given-names>S</given-names></name><name><surname>Mine</surname><given-names>A</given-names></name><name><surname>Imazato</surname><given-names>S</given-names></name></person-group>. <article-title>Predicting the debonding of CAD/CAM composite resin crowns with AI</article-title>. <source>J Dent Res</source>. (<year>2019</year>) <volume>98</volume>(<issue>11</issue>):<fpage>1234</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1177/0022034519867641</pub-id><pub-id pub-id-type="pmid">31379234</pub-id></citation></ref>
<ref id="B87"><label>87.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Takahashi</surname><given-names>T</given-names></name><name><surname>Nozaki</surname><given-names>K</given-names></name><name><surname>Gonda</surname><given-names>T</given-names></name><name><surname>Ikebe</surname><given-names>K</given-names></name></person-group>. <article-title>A system for designing removable partial dentures using artificial intelligence. Part 1. Classification of partially edentulous arches using a convolutional neural network</article-title>. <source>J Prosthodont Res</source>. (<year>2021</year>) <volume>65</volume>(<issue>1</issue>):<fpage>115</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.2186/jpr.JPOR_2019_354</pub-id><pub-id pub-id-type="pmid">32938860</pub-id></citation></ref>
<ref id="B88"><label>88.</label><citation citation-type="book"><person-group person-group-type="author"><name><surname>Rokaya</surname><given-names>D</given-names></name><name><surname>Kongkiatkamon</surname><given-names>S</given-names></name><name><surname>Heboyan</surname><given-names>A</given-names></name><name><surname>Dam</surname><given-names>VV</given-names></name><name><surname>Amornvit</surname><given-names>P</given-names></name><name><surname>Khurshid</surname><given-names>Z</given-names></name><etal/></person-group> <article-title>3D-printed biomaterials in biomedical application</article-title>. In: <person-group person-group-type="editor"><name><surname>Jana</surname><given-names>S</given-names></name><name><surname>Jana</surname><given-names>S</given-names></name></person-group>, editors. <source>Functional biomaterials: drug delivery and biomedical applications</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer Singapore</publisher-name> (<year>2022</year>). p. <fpage>319</fpage>&#x2013;<lpage>39</lpage>.</citation></ref>
<ref id="B89"><label>89.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sporring</surname><given-names>J</given-names></name><name><surname>Hommelhoff Jensen</surname><given-names>K</given-names></name></person-group>. <article-title>Bayes reconstruction of missing teeth</article-title>. <source>J Math Imaging Vis</source>. (<year>2008</year>) <volume>31</volume>(<issue>2</issue>):<fpage>245</fpage>&#x2013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1007/s10851-008-0081-6</pub-id></citation></ref>
<ref id="B90"><label>90.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname><given-names>J</given-names></name><name><surname>Xia</surname><given-names>JJ</given-names></name><name><surname>Li</surname><given-names>J</given-names></name><name><surname>Zhou</surname><given-names>X</given-names></name></person-group>. <article-title>Reconstruction-based digital dental occlusion of the partially edentulous dentition</article-title>. <source>IEEE J Biomed Health Inform</source>. (<year>2017</year>) <volume>21</volume>(<issue>1</issue>):<fpage>201</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1109/JBHI.2015.2500191</pub-id><pub-id pub-id-type="pmid">26584502</pub-id></citation></ref>
<ref id="B91"><label>91.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>Q</given-names></name><name><surname>Lin</surname><given-names>S</given-names></name><name><surname>Wu</surname><given-names>J</given-names></name><name><surname>Lyu</surname><given-names>P</given-names></name><name><surname>Zhou</surname><given-names>Y</given-names></name></person-group>. <article-title>Automatic drawing of customized removable partial denture diagrams based on textual design for the clinical decision support system</article-title>. <source>J Oral Sci</source>. (<year>2020</year>) <volume>62</volume>(<issue>2</issue>):<fpage>236</fpage>&#x2013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.2334/josnusd.19-0138</pub-id><pub-id pub-id-type="pmid">32161232</pub-id></citation></ref>
<ref id="B92"><label>92.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cheng</surname><given-names>C</given-names></name><name><surname>Cheng</surname><given-names>X</given-names></name><name><surname>Dai</surname><given-names>N</given-names></name><name><surname>Jiang</surname><given-names>X</given-names></name><name><surname>Sun</surname><given-names>Y</given-names></name><name><surname>Li</surname><given-names>W</given-names></name></person-group>. <article-title>Prediction of facial deformation after complete denture prosthesis using BP neural network</article-title>. <source>Comput Biol Med</source>. (<year>2015</year>) <volume>66</volume>:<fpage>103</fpage>&#x2013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2015.08.018</pub-id><pub-id pub-id-type="pmid">26386549</pub-id></citation></ref>
<ref id="B93"><label>93.</label><citation citation-type="other"><collab>American Dental Association</collab>. <comment>Policy on Evidence-Based Dentistry (2001). Available at:</comment> <ext-link ext-link-type="uri" xlink:href="https://www.ada.org/en/about-the-ada/ada-positions-policies-and-statements/policy-on-evidence-based-dentistry">https://www.ada.org/en/about-the-ada/ada-positions-policies-and-statements/policy-on-evidence-based-dentistry</ext-link></citation></ref>
<ref id="B94"><label>94.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rieke</surname><given-names>N</given-names></name><name><surname>Hancox</surname><given-names>J</given-names></name><name><surname>Li</surname><given-names>W</given-names></name><name><surname>Milletar&#x00EC;</surname><given-names>F</given-names></name><name><surname>Roth</surname><given-names>HR</given-names></name><name><surname>Albarqouni</surname><given-names>S</given-names></name><etal/></person-group> <article-title>The future of digital health with federated learning</article-title>. <source>NPJ Digit Med</source>. (<year>2020</year>) <volume>3</volume>(<issue>1</issue>):<fpage>119</fpage>. <pub-id pub-id-type="doi">10.1038/s41746-020-00323-1</pub-id><pub-id pub-id-type="pmid">33015372</pub-id></citation></ref>
<ref id="B95"><label>95.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schwendicke</surname><given-names>F</given-names></name><name><surname>Samek</surname><given-names>W</given-names></name><name><surname>Krois</surname><given-names>J</given-names></name></person-group>. <article-title>Artificial intelligence in dentistry: chances and challenges</article-title>. <source>J Dent Res</source>. (<year>2020</year>) <volume>99</volume>(<issue>7</issue>):<fpage>769</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1177/0022034520915714</pub-id><pub-id pub-id-type="pmid">32315260</pub-id></citation></ref>
<ref id="B96"><label>96.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Ge</surname><given-names>Y</given-names></name><name><surname>Guo</surname><given-names>Y</given-names></name><name><surname>Yang</surname><given-names>Y-C</given-names></name><name><surname>Al-Garadi</surname><given-names>MA</given-names></name><name><surname>Sarker</surname><given-names>A</given-names></name></person-group>. <comment>Few-shot learning for medical text: A systematic review. arXiv preprint arXiv:2204.14081</comment> (<year>2022</year>).</citation></ref>
<ref id="B97"><label>97.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vest</surname><given-names>JR</given-names></name><name><surname>Gamm</surname><given-names>LD</given-names></name></person-group>. <article-title>Health information exchange: persistent challenges and new strategies</article-title>. <source>J Am Med Inform Assoc</source>. (<year>2010</year>) <volume>17</volume>(<issue>3</issue>):<fpage>288</fpage>&#x2013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1136/jamia.2010.003673</pub-id><pub-id pub-id-type="pmid">20442146</pub-id></citation></ref>
<ref id="B98"><label>98.</label><citation citation-type="other"><person-group person-group-type="author"><name><surname>Zhang</surname><given-names>Y</given-names></name><name><surname>Kang</surname><given-names>B</given-names></name><name><surname>Hooi</surname><given-names>B</given-names></name><name><surname>Yan</surname><given-names>S</given-names></name><name><surname>Feng</surname><given-names>J</given-names></name></person-group>. <comment>Deep long-tailed learning: A survey. arXiv preprint arXiv:2110.04596</comment> (<year>2021</year>).</citation></ref>
<ref id="B99"><label>99.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hagras</surname><given-names>H</given-names></name></person-group>. <article-title>Toward human-understandable, explainable AI</article-title>. <source>Computer</source>. (<year>2018</year>) <volume>51</volume>(<issue>9</issue>):<fpage>28</fpage>&#x2013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1109/MC.2018.3620965</pub-id></citation></ref>
<ref id="B100"><label>100.</label><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deo</surname><given-names>RC</given-names></name></person-group>. <article-title>Machine learning in medicine</article-title>. <source>Circulation</source>. (<year>2015</year>) <volume>132</volume>(<issue>20</issue>):<fpage>1920</fpage>&#x2013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1161/CIRCULATIONAHA.115.001593</pub-id><pub-id pub-id-type="pmid">26572668</pub-id></citation></ref></ref-list>
</back>
</article>