<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2022.1025806</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Apparent age prediction from faces: A survey of modern approaches</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Agbo-Ajala</surname> <given-names>Olatunbosun</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Viriri</surname> <given-names>Serestina</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1335032/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Oloko-Oba</surname> <given-names>Mustapha</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1590353/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ekundayo</surname> <given-names>Olufisayo</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Heymann</surname> <given-names>Reolyn</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Computer Science Discipline, School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal</institution>, <addr-line>Durban</addr-line>, <country>South Africa</country></aff>
<aff id="aff2"><sup>2</sup><institution>Electrical and Electronic Engineering Science, University of Johannesburg</institution>, <addr-line>Johannesburg</addr-line>, <country>South Africa</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: M. Hassaballah, South Valley University, Egypt</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Khulumani Sibanda, University of Fort Hare, South Africa; Ayodele Adebiyi, Covenant University, Nigeria</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Serestina Viriri <email>viriris&#x00040;ukzn.ac.za</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Machine Learning and Artificial Intelligence, a section of the journal Frontiers in Big Data</p></fn></author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>10</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>5</volume>
<elocation-id>1025806</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>08</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>09</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Agbo-Ajala, Viriri, Oloko-Oba, Ekundayo and Heymann.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Agbo-Ajala, Viriri, Oloko-Oba, Ekundayo and Heymann</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Apparent age estimation <italic>via</italic> human face images has attracted increased attention due to its numerous real-world applications. Predicting the apparent age has been quite difficult for both machines and humans. Nevertheless, researchers have focused on machine estimation of &#x0201C;age as perceived&#x0201D; to a high level of accuracy, and they continue to examine different methods to enhance its results. This paper presents a critical review of modern approaches and techniques for the apparent age estimation task. We also present a comparative analysis of the performance of some of those approaches on apparent facial aging benchmarks. The study also highlights the strengths and weaknesses of each approach used for apparent age estimation to guide the choice of appropriate algorithms for future work in the field. The work focuses on the most popular algorithms and those that appear to have been the most successful at improving the existing state-of-the-art results. We base our evaluations on three facial aging datasets: Looking At People (LAP)-2015, LAP-2016, and APPA-REAL, the most popular publicly available benchmark datasets for apparent age estimation.</p></abstract>
<kwd-group>
<kwd>apparent age</kwd>
<kwd>convolutional neural network</kwd>
<kwd>deep learning</kwd>
<kwd>facial aging</kwd>
<kwd>age prediction</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="3"/>
<equation-count count="2"/>
<ref-count count="51"/>
<page-count count="15"/>
<word-count count="8964"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Age estimation is a very prolific area of research within the computer vision community (Huerta et al., <xref ref-type="bibr" rid="B22">2015</xref>; Drobnyh and Polovinkin, <xref ref-type="bibr" rid="B11">2017</xref>). There has been an increasing interest in age estimation from facial images (Drobnyh and Polovinkin, <xref ref-type="bibr" rid="B11">2017</xref>) due to its growing demand in various potential applications in security control (Abbas and Kareem, <xref ref-type="bibr" rid="B1">2018</xref>), human-computer interaction (Abbas and Kareem, <xref ref-type="bibr" rid="B1">2018</xref>), social media (Ruiz-Del-Solar et al., <xref ref-type="bibr" rid="B46">2009</xref>), and forensic studies (Bouchrika et al., <xref ref-type="bibr" rid="B5">2016</xref>). Although this subject has been extensively studied, the ability to estimate human age reliably and correctly from face images is still far from human performance levels (Onifade, <xref ref-type="bibr" rid="B38">2015</xref>). There exist two kinds of facial age estimation: one is real (biological) age estimation, which determines the precise chronological or biological age of a person from the facial image (Shen et al., <xref ref-type="bibr" rid="B47">2018</xref>); the other is apparent age estimation (Agustsson et al., <xref ref-type="bibr" rid="B2">2017</xref>), which focuses on &#x0201C;how old a person looks&#x0201D; rather than on the real or biological age. The difference between traditional real age estimation and apparent age estimation is that the apparent age labels are annotated by human assessors rather than being the true biological age. Some people may appear younger than their real age while others may appear older; as a result, the real age may differ from the apparent age of each subject.</p>
<p>Several methods have been proposed for apparent age estimation. The availability of huge training datasets and an increase in computational power have made deep learning with convolutional neural networks (CNNs) the method of choice for the estimation task. Many researchers have studied several of these CNN methods, which have improved the results and performance of apparent age estimation. However, due to the challenging nature of apparent age estimation, attempts to enhance the accuracy of age estimation are still very much in progress, and researchers continue to examine different CNN and other modern methods. Hence, this paper critically reviews the modern approaches and techniques employed for apparent age estimation. We also present a comparative analysis of the performance of some of those approaches on standard apparent age datasets. The study also highlights the strengths and weaknesses of each approach to guide the choice of the appropriate approach for improving the existing state-of-the-art results in the field. To ensure fairness in evaluating the performance of these approaches, we employed the popular apparent aging datasets and the standard evaluation metrics that are widely used in the age estimation literature. <xref ref-type="fig" rid="F1">Figure 1</xref> displays the overall idea of a typical apparent age estimation system.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>A typical apparent age estimation system. An age estimation system follows a general process that includes face detection, image preprocessing (landmark detection and face alignment), feature extraction (extracting the useful features from the input image), and classification itself.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-05-1025806-g0001.tif"/>
</fig>
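The stages of the generic system shown in Figure 1 can be sketched as a simple processing pipeline. The sketch below is purely illustrative: every stage function is a hypothetical placeholder, not an implementation of any surveyed model.

```python
# Minimal sketch of the generic apparent age estimation pipeline:
# face detection -> preprocessing -> feature extraction -> classification.
# All stage functions below are hypothetical placeholders.

def detect_face(image):
    """Locate the face region (placeholder: return the whole image)."""
    return image

def preprocess(face):
    """Landmark detection and face alignment (placeholder: identity)."""
    return face

def extract_features(face):
    """Feature extraction, e.g., a CNN backbone (placeholder: mean pixel)."""
    flat = [p for row in face for p in row]
    return [sum(flat) / len(flat)]

def classify_age(features):
    """Map features to an apparent age (placeholder: toy linear rule)."""
    return 20 + features[0] / 10

def estimate_apparent_age(image):
    face = detect_face(image)
    aligned = preprocess(face)
    features = extract_features(aligned)
    return classify_age(features)

image = [[100, 120], [140, 160]]  # a toy 2x2 "image"
print(estimate_apparent_age(image))  # 33.0
```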
<p>The contributions of this paper are highlighted as follows:</p>
<list list-type="order">
<list-item><p>We outlined different state-of-the-art algorithms and techniques for apparent age estimation.</p></list-item>
<list-item><p>We described the performance evaluation analysis of different state-of-the-art models in apparent age estimation.</p></list-item>
<list-item><p>We presented three facial aging datasets widely employed in the research of apparent age estimation.</p></list-item>
<list-item><p>We also highlighted the standard performance evaluation metrics common in most literature for the apparent age estimation.</p></list-item>
</list>
</sec>
<sec id="s2">
<title>2. Application areas for apparent age estimation</title>
<p>Apparent age estimation has many notable real-world applications. Different intelligent application scenarios can benefit from computer-based systems that predict the apparent age of people. Such application areas include the following:</p>
<sec>
<title>2.1. Medical diagnosis</title>
<p>Computer-based prediction of age as perceived by people helps determine whether factors like the environment, depression, sickness, fatigue, and stress contribute to the premature aging of a person. This automatic age prediction will assist in obtaining the information required to decide how to improve the person&#x00027;s aging condition (Escalera et al., <xref ref-type="bibr" rid="B14">2015</xref>; Agustsson et al., <xref ref-type="bibr" rid="B2">2017</xref>).</p>
</sec>
<sec>
<title>2.2. Effect of anti-aging treatment</title>
<p>Automatic apparent age estimation is also valuable for knowing the effect of some anti-aging treatments on people. The effectiveness of these anti-aging treatments, like topical treatment and hormone replacement therapy, can only be understood if an apparent age estimator is in place (Escalera et al., <xref ref-type="bibr" rid="B14">2015</xref>; Rothe et al., <xref ref-type="bibr" rid="B45">2018</xref>).</p>
</sec>
<sec>
<title>2.3. Facial beauty product development</title>
<p>The effect of some cosmetic products on facial aging can only be discovered with an accurate apparent age predictor. Such a predictor helps bring customer insight, a marketing story, and an aesthetic experience to a product, and it assists in determining the best elements of a formulation to deliver desirable, high-quality products in the future (Padme and Desai, <xref ref-type="bibr" rid="B40">2015</xref>; Rothe et al., <xref ref-type="bibr" rid="B45">2018</xref>).</p>
</sec>
<sec>
<title>2.4. Effect of plastic surgery</title>
<p>The essence of plastic surgery procedures is to reshape and restore the appearance of a person&#x00027;s body. The surgery is connected with beautification and involves an extensive range of practical operations, including craniofacial surgery, reconstructive surgery, etc. However, to know the impact of plastic surgery procedures, there is a need for an automated system that determines &#x0201C;how old does a person look?&#x0201D; (Fu et al., <xref ref-type="bibr" rid="B17">2010</xref>; Voelkle et al., <xref ref-type="bibr" rid="B48">2012</xref>).</p>
</sec>
<sec>
<title>2.5. Movie role casting</title>
<p>An apparent age estimator also plays a role in casting for movies, television programs, music videos, stage plays, video documentaries, and television advertisements, among others. In choosing a particular actor, actress, singer, or dancer for a specific role and character, it is necessary to determine the person&#x00027;s age as perceived by people (Padme and Desai, <xref ref-type="bibr" rid="B40">2015</xref>; Rothe et al., <xref ref-type="bibr" rid="B45">2018</xref>).</p>
</sec>
</sec>
<sec id="s3">
<title>3. Description of apparent age estimation algorithms</title>
<p>In this section, we present different algorithms and techniques used for apparent age estimation. As shown in <xref ref-type="fig" rid="F2">Figure 2</xref>, most of these techniques fall into five different categories. Apparent age estimation can be modeled as a multi-class classification (MC), metric regression (MR), ranking, deep label distribution learning (DLDL), or a hybrid (combination of two or more techniques). We present a description of these algorithms and suggest the most effective approach in our opinion.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Classification of apparent age estimation approaches. The typical apparent age estimation methods are categorized into five different algorithms.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-05-1025806-g0002.tif"/>
</fig>
<sec>
<title>3.1. Multi-class classification</title>
<p>The multi-class classification approach views ages or age groups as independent labels, treats each age value as a separate category, and learns an age classifier to infer the person&#x00027;s age (Zhu et al., <xref ref-type="bibr" rid="B51">2015</xref>; Malli et al., <xref ref-type="bibr" rid="B36">2016</xref>; Feng et al., <xref ref-type="bibr" rid="B16">2017</xref>). The MC algorithm maximizes the probability of the ground-truth class label without considering other classes. However, the limited training samples and the class imbalance of most facial aging datasets can result in an overfitting problem (Gao et al., <xref ref-type="bibr" rid="B19">2018</xref>).</p>
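As an illustration, a minimal MC-style prediction sketch: scores (logits) over discrete age classes are turned into probabilities with a softmax, and the most probable class is taken as the predicted age. The logit values here are made up for illustration.

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical classifier logits over age classes 0..100;
# only a few classes receive non-zero scores here.
ages = list(range(0, 101))
logits = [0.0] * 101
logits[25], logits[26], logits[27] = 2.0, 3.5, 1.0

probs = softmax(logits)
predicted_age = ages[probs.index(max(probs))]  # argmax over age classes
print(predicted_age)  # 26
```

Note that the argmax discards the probabilities of neighboring ages, which is exactly the "ignores inter-class relationships" weakness listed in Table 1.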
</sec>
<sec>
<title>3.2. Metric regression</title>
<p>The metric regression-based algorithm views the age classes as a linearly progressing relationship and does not capture the diversity of the aging process. It learns the mapping from the feature space to the age-value space using an appropriate regularization method. Although it is quite natural to address the age estimation task as an MR problem, which minimizes the mean absolute error (MAE) and improves estimation accuracy, MR can produce an unstable training process in which outliers cause a significant error term that degrades accuracy. Some of the typical regression methods include Gaussian process regression (Zhang and Yeung, <xref ref-type="bibr" rid="B49">2010</xref>), quadratic regression (Lanitis et al., <xref ref-type="bibr" rid="B25">2004</xref>), and support vector regression (Guo et al., <xref ref-type="bibr" rid="B20">2008</xref>).</p>
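A toy metric-regression sketch of this idea: fit a linear map from a single scalar feature to age by least squares, then score the fit with the MAE. The feature and age values are made up and deliberately exactly linear.

```python
# Toy metric regression: one scalar feature -> age, fit by least squares,
# evaluated with the mean absolute error (MAE). Data are illustrative.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

features = [1.0, 2.0, 3.0, 4.0]   # hypothetical scalar features
ages = [20.0, 30.0, 40.0, 50.0]   # annotated apparent ages
a, b = fit_linear(features, ages)
preds = [a * x + b for x in features]
print(mae(ages, preds))  # 0.0 (the toy data are exactly linear)
```

A single outlier appended to `ages` would shift both `a` and `b` noticeably, which is the instability the paragraph above describes.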
</sec>
<sec>
<title>3.3. Deep label distribution learning</title>
<p>The deep label distribution learning approach converts a real-valued age to a discrete age distribution to fit the entire age distribution. It is an end-to-end learning model that addresses the problem of insufficient training images experienced in most age estimation tasks: by converting real age values to discrete age distributions, it relaxes the demand for a large number of training images and tolerates uneven data distributions, since the training instances associated with each class label increase without any increase in the number of training samples (Gao et al., <xref ref-type="bibr" rid="B19">2018</xref>; Shen et al., <xref ref-type="bibr" rid="B47">2018</xref>). However, there is often a lack of consistency between the employed evaluation metric and the training objective, which can generate an unsatisfactory result.</p>
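The label-encoding step can be sketched as follows: a single real age is spread into a discrete Gaussian distribution over the age classes, and the KL divergence between the target and predicted distributions serves as the training loss. The age, class range, and sigma below are illustrative, not values from any cited model.

```python
import math

# DLDL-style label encoding sketch: a real age becomes a discrete
# Gaussian distribution over age classes; KL divergence is the loss.

def age_to_distribution(age, classes, sigma=2.0):
    weights = [math.exp(-((c - age) ** 2) / (2 * sigma ** 2)) for c in classes]
    total = sum(weights)
    return [w / total for w in weights]  # normalize to a probability vector

def kl_divergence(target, predicted, eps=1e-12):
    # Small eps guards against log(0) for classes with ~zero mass.
    return sum(t * math.log((t + eps) / (p + eps))
               for t, p in zip(target, predicted))

classes = list(range(0, 101))
target = age_to_distribution(30.0, classes)
print(classes[target.index(max(target))])  # 30: the mode sits at the true age
```

Because neighboring ages (29, 31, ...) receive non-zero mass, every nearby class label gets training signal from a single annotated image, which is the data-efficiency argument made above.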
</sec>
<sec>
<title>3.4. Ranking</title>
<p>The ranking-based algorithm uses an age-axis strategy for age-class prediction and utilizes the relative order of ages. It uses relative age ranks instead of real age labels, ranking age class labels in descending order of their relevance to the presented face image; this avoids making a hard decision for each age label and simplifies the problem (Chang et al., <xref ref-type="bibr" rid="B6">2010</xref>; Li et al., <xref ref-type="bibr" rid="B27">2012</xref>; Liu H. et al., <xref ref-type="bibr" rid="B30">2017</xref>). Nonetheless, ranking algorithms can generate suboptimal results, especially when the training objective and the evaluation metric are inconsistent.</p>
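A minimal sketch of the binary-decomposition idea behind ranking: age estimation becomes a series of questions &#x0201C;is the age greater than k?&#x0201D;, and the predicted age is the number of positive answers. The ranker outputs below are made up for illustration.

```python
# Ranking sketch: K binary rankers, where outputs[k] == 1 means the
# k-th ranker answers "age > k". The age is the count of positives.

def rank_to_age(binary_outputs):
    return sum(binary_outputs)

# Hypothetical outputs of K = 100 binary rankers for a face judged ~34.
K = 100
outputs = [1 if k < 34 else 0 for k in range(K)]
print(rank_to_age(outputs))  # 34
```

Aggregating binary outputs this way exploits the ordinal structure of age, but each ranker is still trained with its own binary objective, which is where the objective/metric inconsistency noted above can arise.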
</sec>
<sec>
<title>3.5. Hybrid</title>
<p>A hybrid algorithm can be built by combining two or more algorithms in a parallel or hierarchical manner to produce a better performance. The algorithm takes advantage of the strengths of each constituent algorithm to obtain a more robust system (Guo et al., <xref ref-type="bibr" rid="B20">2008</xref>; Dib and El-saban, <xref ref-type="bibr" rid="B10">2010</xref>; Choi et al., <xref ref-type="bibr" rid="B7">2011</xref>). Unfortunately, combining two or more algorithms can result in large storage overhead and computational costs, affecting its applicability on resource-constrained machines.</p>
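One common hybrid, listed for Gao et al. (2018) in Table 2 as label distribution learning plus expectation regression, predicts a distribution over age classes and then regresses the final age as the expectation of that distribution. A minimal sketch with made-up probabilities:

```python
# Hybrid sketch: expectation regression on top of a predicted label
# distribution. The distribution values below are illustrative.

def expected_age(classes, probs):
    # The predicted age is the probability-weighted mean of the classes.
    return sum(c * p for c, p in zip(classes, probs))

classes = [24, 25, 26, 27, 28]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]  # hypothetical predicted distribution
print(round(expected_age(classes, probs), 3))  # 26.0
```

Unlike an argmax, the expectation uses the whole distribution, so the classification head and the regression objective reinforce each other rather than being trained in isolation.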
</sec>
<sec>
<title>3.6. Summary of apparent age estimation algorithms</title>
<p>In this section, we summarize the main strengths and weaknesses of the different apparent age estimation algorithms in <xref ref-type="table" rid="T1">Table 1</xref>. Most of the existing state-of-the-art methods use MC and MR algorithms. The hybrid algorithm combines two or more algorithms, which gives a better and more robust model by compensating for the weak points of each algorithm with the strengths of the others. The ranking algorithm, on the other hand, addresses a problem peculiar to the classification algorithm by using the ordinal information of ages and converting the task into a series of binary classification problems. With DLDL, a better model is obtained by using the adjacent ages to generate a label distribution for each age, even when the label distribution of the dataset is uneven. <xref ref-type="table" rid="T2">Table 2</xref> presents the performance of state-of-the-art CNN architectures, with a clear distinction of the best results on each dataset for apparent age estimation, evaluated using MAE and &#x003F5;-error.</p>
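The two metrics in Table 2 can be sketched directly. MAE averages the absolute errors; the LAP &#x003F5;-error compares a prediction x with the mean &#x003BC; and standard deviation &#x003C3; of the human annotations via 1 &#x02212; exp(&#x02212;(x &#x02212; &#x003BC;)&#x000B2;/(2&#x003C3;&#x000B2;)), so a perfect prediction scores 0. The numeric inputs below are illustrative.

```python
import math

# MAE: mean absolute error between true and predicted ages.
def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# LAP epsilon-error: mu and sigma are the mean and standard deviation
# of the human annotators' age votes for one image.
def epsilon_error(x, mu, sigma):
    return 1.0 - math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

print(mae([30, 40], [32, 37]))             # 2.5
print(round(epsilon_error(30, 30, 3), 3))  # 0.0: a perfect prediction
print(round(epsilon_error(36, 30, 3), 3))  # approaches 1 as error grows
```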
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Description of state-of-the-art algorithms in apparent age estimation.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Algorithms</bold></th>
<th valign="top" align="left"><bold>Description</bold></th>
<th valign="top" align="left"><bold>Strengths</bold></th>
<th valign="top" align="left"><bold>Weaknesses</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Multi-class classification (MC)</td>
<td valign="top" align="left">A multi-class classification method considers each age value as an independent category and then learns a classifier for the age classification task; it neglects the internal relationships between those age values.</td>
<td valign="top" align="left">&#x02022;MC maximizes the expectation of the ground-truth set without considering other classes. <break/>&#x02022;It also presents age value as a separate category and later learns a classifier for age estimation.</td>
<td valign="top" align="left">&#x02022;MC algorithm can easily lead to over-fitting due to imbalances problem among the age classes and insufficient training images. <break/>&#x02022;The method can also lead to unstable training.</td>
</tr>
<tr>
<td valign="top" align="left">Metric Regression (MR)</td>
<td valign="top" align="left">MR models the relationship between a set of features and a continuous target variable. It makes some predictions from data by learning the relationship between the features of the data and some continuous-valued answers.</td>
<td valign="top" align="left">&#x02022;MR minimizes the MAE; The smaller the MAE, the better the performance of an estimator.</td>
<td valign="top" align="left">&#x02022;Some outliers in the input data can cause a large error term, leading to an unstable training process and producing an unsatisfactory performance. <break/>&#x02022;MR presents the age category as a linearly growing dependence rather than displaying the diversity of the aging process.</td>
</tr>
<tr>
<td valign="top" align="left">Deep Label Distribution Learning (DLDL)</td>
<td valign="top" align="left">DLDL is an end-to-end learning model that solves the problem of insufficient training images experienced in most age estimation tasks. It relaxes the demand for a large number of training images and uneven data distribution by converting real age values to discrete age distribution to fit the whole age distribution.</td>
<td valign="top" align="left">&#x02022;DLDL overcomes the uneven age label distribution problem by converting real age values to a discrete age distribution to fit the whole age distribution. <break/>&#x02022;DLDL also eases the necessity for a large number of images during training.</td>
<td valign="top" align="left">&#x02022;DLDL may be suboptimal. <break/>&#x02022;There might be an inconsistency during the training stage.</td>
</tr>
<tr>
<td valign="top" align="left">Ranking</td>
<td valign="top" align="left">Ranking uses the age-axis approach for age estimation. It employs the relative order of ages to solve the classification bias problem caused by the unevenness of the dataset&#x00027;s sample images.</td>
<td valign="top" align="left">&#x02022;The ranking model employs an age-axis approach that uses the relative order of age for the age classification. <break/>&#x02022;The algorithm also transforms age estimation into a series of binary classification problems where the output of the rankers is aggregated directly from those binary outputs for the classification.</td>
<td valign="top" align="left">&#x02022;Ranking produces inconsistency in the training objectives and evaluation metric. <break/>&#x02022;Ranking method may be suboptimal at times.</td>
</tr>
<tr>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">A hybrid algorithm combines two or more modeling methods in a parallel or hierarchical manner to produce a better performance. It makes the most of the strengths of each technique employed and is expected not only to outperform the individual approaches but also to be more robust.</td>
<td valign="top" align="left">&#x02022;A hybrid model makes the most of the advantage of the strengths of each technique used, and it is expected to outperform other individual approaches. <break/>&#x02022;Hybrid also produces a robust classifier.</td>
<td valign="top" align="left">&#x02022;Combining two or more models might result in storage overhead and a huge computational cost. <break/>&#x02022;It might be hard to deploy a hybrid model on resource-constrained devices.</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Description of state-of-the-art convolutional neural network (CNN) architectures in apparent age estimation.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Approach</bold></th>
<th valign="top" align="left"><bold>Pre-trained models</bold></th>
<th valign="top" align="left"><bold>Algorithm</bold></th>
<th valign="top" align="left"><bold>Dataset</bold></th>
<th valign="top" align="left"><bold>External data</bold></th>
<th valign="top" align="left"><bold>&#x003F5;-Score</bold></th>
<th valign="top" align="left"><bold>MAE</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Rothe et al. (<xref ref-type="bibr" rid="B44">2015</xref>)</td>
<td valign="top" align="left">20 CNN networks model</td>
<td valign="top" align="left">VGG</td>
<td valign="top" align="left">MC</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">ImageNet; IMDb; WIKI;</td>
<td valign="top" align="left">0.278</td>
<td valign="top" align="left">3.221</td>
</tr>
<tr>
<td valign="top" align="left">Liu et al. (<xref ref-type="bibr" rid="B34">2015</xref>)</td>
<td valign="top" align="left">Real-value &#x0002B; Gaussian label distribution</td>
<td valign="top" align="left">GoogleNet</td>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">CASIA-WebFace; CACD; WebFaceAge; MORPH-II;</td>
<td valign="top" align="left">0.2872</td>
<td valign="top" align="left">3.3345</td>
</tr>
<tr>
<td valign="top" align="left">Zhu et al. (<xref ref-type="bibr" rid="B51">2015</xref>)</td>
<td valign="top" align="left">Multiple models &#x0002B; RF &#x0002B; SVR</td>
<td valign="top" align="left">GoogLeNet</td>
<td valign="top" align="left">MR</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">CASIA-WebFace; Adience; MORPH-II; FGNET; Lifespan; CACD; Private;</td>
<td valign="top" align="left">0.295</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Ranjan et al. (<xref ref-type="bibr" rid="B43">2015</xref>)</td>
<td valign="top" align="left">DCNN-H-3NNR(Gaussian loss)</td>
<td valign="top" align="left">DCNN</td>
<td valign="top" align="left">MR</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">CASIA-WebFace; Adience; MORPH-II;</td>
<td valign="top" align="left">0.373</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Huo et al. (<xref ref-type="bibr" rid="B23">2016</xref>)</td>
<td valign="top" align="left">KL divergence &#x0002B; Softmax</td>
<td valign="top" align="left">VGG-16 &#x0002B; new CNN</td>
<td valign="top" align="left">DLDL</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">MORPH-II; FG-Net; Adience; Web;</td>
<td valign="top" align="left">0.3057</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Malli et al. (<xref ref-type="bibr" rid="B36">2016</xref>)</td>
<td valign="top" align="left">Ensemble of 3 CNNs</td>
<td valign="top" align="left">VGG-16</td>
<td valign="top" align="left">MC</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">IMDb-WIKI;</td>
<td valign="top" align="left">0.3668</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Antipov et al. (<xref ref-type="bibr" rid="B3">2016</xref>)</td>
<td valign="top" align="left">LDL &#x0002B; Classification</td>
<td valign="top" align="left">VGG-16</td>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">IMDb-WIKI; Private;</td>
<td valign="top" align="left">0.241</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Gurpinar et al. (<xref ref-type="bibr" rid="B21">2016</xref>)</td>
<td valign="top" align="left">VGG-Face &#x0002B; Kernel ELM</td>
<td valign="top" align="left">VGG-16</td>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">0.3740</td>
<td valign="top" align="left">3.85</td>
</tr>
<tr>
<td valign="top" align="left">Liu W. et al. (<xref ref-type="bibr" rid="B33">2017</xref>)</td>
<td valign="top" align="left">GA-DFL &#x0002B; Multi-path CNN</td>
<td valign="top" align="left">VGG-16</td>
<td valign="top" align="left">MC</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">0.369</td>
<td valign="top" align="left">4.21</td>
</tr>
<tr>
<td valign="top" align="left">Ranjan et al. (<xref ref-type="bibr" rid="B42">2017</xref>)</td>
<td valign="top" align="left">Euclidean &#x0002B; Gaussian loss functions</td>
<td valign="top" align="left">Novel CNN</td>
<td valign="top" align="left">MR</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">MORPH-II; IMDb-WIKI; Adience;</td>
<td valign="top" align="left">0.293</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Gao et al. (<xref ref-type="bibr" rid="B18">2017</xref>)</td>
<td valign="top" align="left">VGG-Face &#x0002B; DLDL(KL loss function)</td>
<td valign="top" align="left">ZF-Net &#x0002B; VGG-Net</td>
<td valign="top" align="left">DLDL</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">0.31</td>
<td valign="top" align="left">3.51</td>
</tr>
<tr>
<td valign="top" align="left">Agustsson et al. (<xref ref-type="bibr" rid="B2">2017</xref>)</td>
<td valign="top" align="left">Residual DEX</td>
<td valign="top" align="left">VGG-16</td>
<td valign="top" align="left">MC</td>
<td valign="top" align="left">APPA-REAL</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">4.082</td>
</tr>
<tr>
<td valign="top" align="left">Gao et al. (<xref ref-type="bibr" rid="B19">2018</xref>)</td>
<td valign="top" align="left">LDL &#x0002B; Expectation Regression</td>
<td valign="top" align="left">ThinAgeNet</td>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">LAP-2015; LAP-2016;</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">0.272; 0.267</td>
<td valign="top" align="left">3.135; 3.452</td>
</tr>
<tr>
<td valign="top" align="left">Duan et al. (<xref ref-type="bibr" rid="B12">2018a</xref>)</td>
<td valign="top" align="left">CNN &#x0002B; ELM</td>
<td valign="top" align="left">AgeNet</td>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">ImageNet; IMDb-WIKI; MORPH-II;</td>
<td valign="top" align="left">0.3679</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Rothe et al. (<xref ref-type="bibr" rid="B45">2018</xref>)</td>
<td valign="top" align="left">DEX</td>
<td valign="top" align="left">VGG-16</td>
<td valign="top" align="left">MC</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">ImageNet; IMDb-WIKI</td>
<td valign="top" align="left">0.282</td>
<td valign="top" align="left">3.252</td>
</tr>
<tr>
<td valign="top" align="left">Li et al. (<xref ref-type="bibr" rid="B28">2019</xref>)</td>
<td valign="top" align="left">CNN &#x0002B; BridgeNet</td>
<td valign="top" align="left">VGGNet</td>
<td valign="top" align="left">MR</td>
<td valign="top" align="left">LAP-2015</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">0.26</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Liu et al. (<xref ref-type="bibr" rid="B31">2019</xref>)</td>
<td valign="top" align="left">ODL (cross-entropy)</td>
<td valign="top" align="left">VGGNet</td>
<td valign="top" align="left">Ranking</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">0.312</td>
<td valign="top" align="left">-</td>
</tr>
<tr>
<td valign="top" align="left">Dagher and Barbara (<xref ref-type="bibr" rid="B8">2021</xref>)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">VGG-16 &#x0002B; ResNet &#x0002B; GoogLeNet &#x0002B; AlexNet</td>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">FGNET; MORPH</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">2.94; 2.97</td>
</tr>
<tr>
<td valign="top" align="left">Zhao et al. (<xref ref-type="bibr" rid="B50">2022</xref>)</td>
<td valign="top" align="left">adaptive mean residue loss</td>
<td valign="top" align="left">VGG-16 &#x0002B; ResNet50</td>
<td valign="top" align="left">DLDL</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">FGNET</td>
<td valign="top" align="left">0.3882</td>
<td valign="top" align="left">3.61</td>
</tr>
<tr>
<td valign="top" align="left">Kj&#x000E6;rran et al. (<xref ref-type="bibr" rid="B24">2021</xref>)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">AgeNet</td>
<td valign="top" align="left">MC</td>
<td valign="top" align="left">APPA-REAL</td>
<td valign="top" align="left">UTK; IMDb</td>
<td valign="top" align="left">0.3882</td>
<td valign="top" align="left">3.61</td>
</tr>
<tr>
<td valign="top" align="left">Kj&#x000E6;rran et al. (<xref ref-type="bibr" rid="B24">2021</xref>)</td>
<td valign="top" align="left">Gabor feature fusion &#x0002B; PCA &#x0002B; SVM &#x0002B; KPCA &#x0002B; CIAO-SA</td>
<td valign="top" align="left">Novel CNN</td>
<td valign="top" align="left">Hybrid</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">Adience</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">2.10</td>
</tr>
<tr>
<td valign="top" align="left">Deng et al. (<xref ref-type="bibr" rid="B9">2021</xref>)</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">GenderNet; AgeNet; RaceNet</td>
<td valign="top" align="left">MR; Ranking</td>
<td valign="top" align="left">LAP-2016</td>
<td valign="top" align="left">MORPH2; FGNET</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">2.47; 2.59; 2.67</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
</sec>
<sec id="s4">
<title>4. Performance evaluation analysis of state-of-the-art methods in apparent age estimation</title>
<p>Apparent age refers to &#x0201C;how old a person looks.&#x0201D; A significant amount of work has been done on extracting facial features to determine people&#x00027;s apparent age. Rothe et al. (<xref ref-type="bibr" rid="B44">2015</xref>) developed a classification-based solution, deep expectation (DEX), for apparent age estimation. The authors used the VGG-16 architecture, initially pre-trained on ImageNet and then fine-tuned on the newly collected IMDb-WIKI dataset of 500,000 unconstrained face images. Age estimation was addressed as a deep classification problem. As part of the solution, they employed the open-source face detector of Mathias et al. (<xref ref-type="bibr" rid="B37">2014</xref>) to locate the face in an image before extracting predictions on the cropped face from an ensemble of 20 networks, each fine-tuned on the LAP-2015 dataset. The model achieved strong results but demanded high computational cost and large storage overhead to pre-train on huge datasets like ImageNet and IMDb-WIKI.</p>
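The expectation step at the heart of DEX can be illustrated with a short sketch: the softmax probabilities over discrete age classes are converted into a single age estimate by taking their expected value instead of the argmax class. The snippet below is an illustrative assumption (the 0&#x02013;100 age range and the function name are ours, not the authors' code):

```python
import numpy as np

def dex_expected_age(logits, ages=None):
    """DEX-style age estimate: softmax over discrete age classes,
    then the expected value sum_i p_i * y_i instead of the argmax."""
    logits = np.asarray(logits, dtype=float)
    if ages is None:
        ages = np.arange(len(logits))  # assume classes are 0..K-1 years
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return float(np.dot(probs, ages))
```

Taking the expectation rather than the most probable class yields sub-year resolution, which is one reason the classification formulation performed so well on LAP-2015.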
<p>Liu et al. (<xref ref-type="bibr" rid="B34">2015</xref>) later presented a hybrid model (AgeNet) that fuses a regression model (real-valued) and a classification model (Gaussian label distribution) to solve the apparent age estimation task. Both models employed the GoogleNet CNN to learn informative age representations after preprocessing the images with face detection, facial landmark localization, and face normalization. The models were initially pre-trained on a large-scale facial dataset with identity labels, then fine-tuned on another large-scale age dataset with unconstrained age labels, and finally fine-tuned on the training images of the original LAP-2015 dataset with apparent age labels. The hybridized model took second place in the 2015 edition of the ChaLearn Looking At People (LAP) competition. However, the 22-layer GoogleNet deep convolutional neural network was too deep to be implemented on resource-constrained devices.</p>
<p>Zhu et al. (<xref ref-type="bibr" rid="B51">2015</xref>) studied a method that utilized deep representations trained in a cascaded way. The approach also employed the GoogleNet design, initially pre-trained on face images without age labels, then on data with chronological age labels to fine-tune the network parameters, before finally fine-tuning the apparent age model on the apparent age dataset itself. The proposed approach consists of four processes: an image pre-processing stage (face detection and landmark localization), a CNN architecture designed in a cascaded way, a coarse-to-fine design consisting of age grouping and local age estimators, and fused predictors. Although the model achieved an &#x003F5;-error of 0.2948 on the ChaLearn LAP-2015 dataset, taking third place at the 2015 edition of the ChaLearn LAP challenge, it demands high computational cost and overhead to pre-train an equally large CNN architecture on very large datasets.</p>
<p>Ranjan et al. (<xref ref-type="bibr" rid="B43">2015</xref>) developed a regression-based automatic age estimation model. The approach estimates apparent age from unconstrained images using a deep CNN (DCNN). The architecture consists of four steps: face detection, face alignment, feature extraction, and a 3-layer neural network regression. They employed a deep pyramid deformable parts model for the face detection phase and an ensemble-of-regression-trees method (dlib C&#x0002B;&#x0002B; library) for the face alignment phase. For the feature extraction phase, they proposed a method that obtained features from the pool5 layer of a pre-trained DCNN model without re-tuning the pre-trained network for face description tasks on age estimation data. For the age estimation phase, they used a 3-layer neural network regression model with a Gaussian loss function and a hierarchical learning approach to further boost the results. Their approach achieved comparable results when evaluated on the LAP-2015 dataset. However, outliers in the input data can cause a large error term, leading to an unstable training process.</p>
<p>Huo et al. (<xref ref-type="bibr" rid="B23">2016</xref>) introduced a deep CNN with distribution-based loss functions. The label distributions exploit the ambiguity introduced <italic>via</italic> manual labeling, yielding a better model than one that uses exact ages as the target. The method employed two deep CNN models with different architectures: the first is based on the popular pre-trained VGG-16 CNN; the second is based on a different CNN architecture. The VGG-16 architecture was fine-tuned on three different datasets before both models were finally fine-tuned on the competition dataset. The fusion of the two outputs generated the final predicted ages. The method achieved an &#x003F5;-error of 0.3057 when evaluated on the LAP-2015 dataset. However, a distribution-based loss function might yield inconsistent results during the training stage.</p>
<p>Malli et al. (<xref ref-type="bibr" rid="B36">2016</xref>) investigated an approach that used VGG-16 deep CNN models pre-trained and fine-tuned on the IMDb-WIKI dataset. The approach, an ensemble of deep learning methods, extracted deep features from the 7th (fully connected) layer of the VGG-Face model and trained a 3-layer neural network with two hidden layers on these deep features. They treated age estimation as a classification problem and thus assigned age labels within standard deviation boundaries as true. The approach improved significantly after fine-tuning the VGG-16 models for the age-shifted grouping technique. Although the designed fusion schemes with an ensemble of deep learning methods achieved an &#x003F5;-error of 0.3668 on the test set of the LAP-2016 dataset, such ensembles can overfit if an appropriate regularization algorithm is not utilized.</p>
<p>In Antipov et al. (<xref ref-type="bibr" rid="B3">2016</xref>), the authors proposed a pre-trained VGG-16 CNN model that combined two separate models: a general model and a children model. The general model was initially trained on the huge IMDb-WIKI dataset for biological age estimation and then fine-tuned for the apparent age estimation task. The children model used a pre-trained VGG-16 network trained on a private dataset of children between 0 and 12 years old before it was also fine-tuned on the original apparent age estimation dataset; the children&#x00027;s network was fine-tuned from the general network. The method uses separate age encoding strategies for training the two networks: a label distribution encoding for the general network and a 0/1 classification encoding for the children network. The hybridized approach combines the strengths of the two algorithms hierarchically but employs a lighter CNN architecture that allows its applicability on resource-constrained devices.</p>
<p>Gurpinar et al. (<xref ref-type="bibr" rid="B21">2016</xref>) proposed a two-level method for apparent age estimation from facial images. They classified samples into overlapping age groups and, within each age group, estimated apparent age with local regressors before fusing the outputs for the final age estimate. The method involved three phases: face alignment, feature extraction, and model learning. They used a deformable parts model (DPM) as the face detector and a pre-trained CNN for feature extraction from the aligned images before employing kernel extreme learning machines for classification. The method&#x00027;s effectiveness was evaluated on the ChaLearn LAP-2016 dataset, with a reported &#x003F5;-error of 0.374 on the test set and an MAE of 3.85.</p>
<p>Liu W. et al. (<xref ref-type="bibr" rid="B33">2017</xref>) proposed a group-aware deep feature learning (GA-DFL) technique for apparent age estimation. The CNN-based method learned the needed feature descriptor directly from the raw pixels of the face images. The ordinal ages were split into a set of discrete groups to learn deep feature transformations, and a multi-path CNN approach was designed to combine the corresponding information. The experimental results showed that the approach performed excellently compared with state-of-the-art methods. It was evaluated on three known face aging datasets and obtained MAEs of 3.93 (FG-NET), 3.25 (MORPH-II), and 4.21 (LAP-2015), with an &#x003F5;-error of 0.369.</p>
<p>Ranjan et al. (<xref ref-type="bibr" rid="B42">2017</xref>) presented a novel multi-purpose CNN model that concurrently solves apparent age estimation, gender recognition, face detection, pose estimation, landmark localization, smile detection, face verification, and recognition from any unconstrained face image. The approach, an improvement of the work in Ranjan et al. (<xref ref-type="bibr" rid="B43">2015</xref>), was trained in a multi-task learning (MTL) framework that develops a synergy among several face-related tasks, improving the performance of each task by learning features robust across the distinct tasks. Employing multiple tasks enables the network to learn the correlations between data from many distributions efficiently. They used the ChaLearn LAP-2015 and FG-NET datasets to model the age classifier, and it achieved comparable results when evaluated on the same datasets.</p>
<p>Gao et al. (<xref ref-type="bibr" rid="B18">2017</xref>) also proposed the deep label distribution learning (DLDL) approach, an end-to-end deep learning design that exploits label ambiguity in both feature and classifier learning. The approach prevented the network from overfitting even when an inadequate training dataset was used. They converted the label of each image into a discrete label distribution and learned it by minimizing the Kullback-Leibler divergence between the ground-truth and predicted label distributions using deep ConvNets; this resolves the problem of ambiguous information among labels. Extensive experimental results revealed that the proposed approach performed significantly better than state-of-the-art methods on apparent age estimation when evaluated on the MORPH-II and LAP-2015 datasets.</p>
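The core of DLDL can be sketched in a few lines: each scalar age label is expanded into a discrete Gaussian distribution over the age classes, and training minimizes the KL divergence between this target and the predicted distribution. A minimal NumPy illustration under assumed parameters (the 2-year sigma and the function names are ours, not the authors' implementation):

```python
import numpy as np

def gaussian_label_distribution(age, classes, sigma=2.0):
    """Convert a scalar age label into a discrete Gaussian
    distribution over the age classes (DLDL-style target)."""
    d = np.exp(-((classes - age) ** 2) / (2.0 * sigma ** 2))
    return d / d.sum()

def kl_divergence(target, pred, eps=1e-12):
    """KL(target || pred); with a fixed target this is equivalent,
    up to a constant, to a cross-entropy training objective."""
    target = np.clip(np.asarray(target, dtype=float), eps, None)
    pred = np.clip(np.asarray(pred, dtype=float), eps, None)
    return float(np.sum(target * (np.log(target) - np.log(pred))))
```

Because neighboring ages receive non-zero target probability, every class is supervised at each step, which helps explain DLDL's resistance to overfitting on small training sets.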
<p>Agustsson et al. (<xref ref-type="bibr" rid="B2">2017</xref>) proposed a model called Residual DEX, an enhancement of DEX (Rothe et al., <xref ref-type="bibr" rid="B44">2015</xref>). The apparent age model was addressed as a classification problem, considering each age value as an independent category. The idea behind the residual is that the error between the ground-truth labels and the rough DEX estimate can be tackled with a dedicated model. The model learns a new regressor with the same CNN architecture as in Rothe et al. (<xref ref-type="bibr" rid="B44">2015</xref>) to predict DEX residuals, which significantly contributed to the performance of the new apparent age estimator.</p>
<p>The authors in Gao et al. (<xref ref-type="bibr" rid="B19">2018</xref>) designed a lightweight CNN architecture that jointly learned age distributions and regressed ages. The CNN-based approach, ThinAgeNet, employed a compression rate of 0.5; an even smaller model with a compression rate of 0.25, TinyAgeNet, was also trained. The method combined LDL and expectation regression in a unified structure to ease the disparity between the training and evaluation stages. The proposed approach efficiently improved on the earlier DLDL in both prediction error and inference speed for age estimation. The approach&#x00027;s effectiveness was validated for real and apparent age estimation tasks using the MORPH-II, LAP-2015, and LAP-2016 datasets.</p>
<p>Duan et al. (<xref ref-type="bibr" rid="B12">2018a</xref>) developed a robust age estimator, CNN2ELM, which employs an ensemble structure combining a CNN and an extreme learning machine (ELM). The model updated the work presented in Duan et al. (<xref ref-type="bibr" rid="B13">2018b</xref>) with a three-level approach comprising feature extraction, age grouping using an ELM classifier, and age estimation with an ELM regressor. The model was initially pre-trained on the ImageNet dataset and then fine-tuned on the IMDb-WIKI and MORPH-II datasets. In experiments on the LAP-2016 dataset, the model outperformed existing state-of-the-art age estimation models, achieving an &#x003F5;-error of 0.3679.</p>
<p>The deep expectation model based on the VGG-16 architecture was proposed in Rothe et al. (<xref ref-type="bibr" rid="B45">2018</xref>). The approach estimates both real and apparent age from a single face image without the use of facial landmarks. The DEX model was pre-trained on both the ImageNet and IMDb-WIKI datasets to achieve better performance. The method was evaluated on standard datasets and obtained state-of-the-art results for both real and apparent age estimation: an MAE of 3.09 (FG-NET), 2.68 (MORPH-II), and 6.521 (CACD), an &#x003F5;-error of 0.2650 (LAP-2015), and 64.0% (exact) and 96.6% (1-off) accuracy on the Adience dataset, outperforming the existing state-of-the-art age estimation methods. It recorded an &#x003F5;-error of 0.3679 on the LAP-2016 dataset.</p>
<p>Li et al. (<xref ref-type="bibr" rid="B28">2019</xref>) proposed a CNN-based technique called BridgeNet for real and apparent age estimation. The proposed model comprises two components that can be jointly learned end-to-end: local regressors and gating networks. The local regressors address heterogeneous data by partitioning the data space, while the gating networks employ a bridge-tree structure to learn the continuity-aware weights used by the local regressors. Experimental results on the MORPH-II, FG-NET, and ChaLearn LAP-2015 datasets show that BridgeNet outperforms state-of-the-art methods.</p>
<p>Liu et al. (<xref ref-type="bibr" rid="B31">2019</xref>) developed an extension of their work in Liu et al. (<xref ref-type="bibr" rid="B32">2018</xref>): an end-to-end ordinal deep learning (ODL) framework with two ordinal regression loss functions, a square loss and a cross-entropy loss. The proposed ranking-based ordinal deep feature learning (ODFL) method learns the features needed for face representation directly from raw image pixels, rather than treating feature extraction and apparent age estimation as independently learned procedures. The work was evaluated on state-of-the-art face aging datasets and achieved superior performance compared to state-of-the-art methods in apparent age estimation.</p>
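A common way to realize the ranking formulation underlying ODL/ODFL-style methods is a bank of binary classifiers, the k-th answering "is the subject older than k?"; the decoded age is then the number of positive answers. A minimal sketch of that decoding (the 0.5 threshold and the function name are assumptions on our part, not the authors' exact decoder):

```python
import numpy as np

def ranking_age(binary_probs, min_age=0, threshold=0.5):
    """Decode an age from K ordinal binary outputs, where
    binary_probs[k] = P(age > min_age + k); the prediction is
    min_age plus the count of thresholds the subject exceeds."""
    binary_probs = np.asarray(binary_probs, dtype=float)
    return min_age + int((binary_probs > threshold).sum())
```

Ordinal losses such as the square and cross-entropy losses above are applied per binary output, so the natural ordering of ages is built directly into the supervision.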
<p>The work presented by Dagher and Barbara (<xref ref-type="bibr" rid="B8">2021</xref>) employed transfer learning from four pre-trained CNN models to develop a facial age estimation model. The authors aimed to find the optimum age gap and to achieve high age estimation accuracy. The model was trained and evaluated on the FG-NET and MORPH datasets, which were deemed suitable because they cover a wide age range, from 0 to 77 years. The proposed model achieved an MAE of 2.94 and 2.97 on the FG-NET and MORPH datasets, respectively.</p>
<p>Recently, Zhao et al. (<xref ref-type="bibr" rid="B50">2022</xref>) proposed an adaptive mean-residue loss for facial age estimation. The proposed mean loss penalizes the deviation of the estimated age distribution&#x00027;s mean from the apparent age. Experiments were performed on popular facial datasets (FG-NET and CLAP2016), achieving an MAE of 3.61 and an &#x003F5;-error of 0.3882. The experimental results show superior performance on both datasets when the loss is applied to state-of-the-art models (VGG-16, ResNet-50), compared to existing mean-variance loss methods. The authors conclude with some recommendations that can improve the performance of their model.</p>
<p>A deep learning model comprising five convolutional layers and three fully-connected layers, trained from scratch, was developed in Kj&#x000E6;rran et al. (<xref ref-type="bibr" rid="B24">2021</xref>) for age estimation from facial images. The model was trained on three benchmark datasets (APPA, UTK, and IMDb) and evaluated on separate held-out data and on the Adience benchmark dataset. The experimental results show inferior performance on the Adience dataset compared to existing models on the same dataset. In contrast, improved performance was obtained on the held-out dataset, with an exact accuracy of 0.304 and a one-off accuracy of 0.463.</p>
<p>Deep learning models applied to facial age estimation tasks, and the different data modalities employed in studying aging, were reviewed in Ashiqur Rahman et al. (<xref ref-type="bibr" rid="B4">2021</xref>). The study presents four broad classes of measures for quantifying algorithms&#x00027; performance on biological age estimation. Based on the findings, directions for future apparent age estimation research were identified, with significant potential for improving the understanding of an individual&#x00027;s health status with respect to body shape, blood samples, and physical activities.</p>
<p>A hybrid facial age estimation method using Gabor feature fusion with an atom search algorithm for feature selection was proposed by Lu et al. (<xref ref-type="bibr" rid="B35">2022</xref>). A Gabor filter with five scales and eight directions was first used to extract facial age features, and a histogram was then employed to code and fuse the indices in each Gabor direction. An algorithm called chaotic improved atom search optimization with simulated annealing (CIASO-SA) was then presented to improve accuracy and reduce the number of selected features, making the approach more adaptive to solving high-dimensional optimization problems. Experimental results found that the Gabor features achieved the best results at a 48 &#x000D7; 48 image resolution, obtaining a one-off accuracy of 85.9%.</p>
<p>Deng et al. (<xref ref-type="bibr" rid="B9">2021</xref>) proposed a multi-feature learning and fusion method for age estimation. Three subnetworks were employed to learn age, gender, and race information. The race and gender information and the age features were concatenated to form a robust feature representation for age estimation. These features were then converted into an exact age using a regression-ranking age estimator. Three popular benchmark datasets (MORPH2, FG-NET, and LAP) were used to validate the model&#x00027;s performance. The proposed model achieved MAEs of 2.47, 2.59, and 2.67 on the three datasets, respectively, comparing favorably to existing methods. The model is also suitable for deployment on mobile devices for age estimation due to its compact memory footprint of only 20 MB.</p>
<p>A gender-specific facial age estimation system was proposed by Raman et al. (<xref ref-type="bibr" rid="B41">2022</xref>) to classify facial images by gender and estimate age. The system is composed of two separate components: one classifies the facial images into male and female, while the other comprises models trained independently on the male and female images. The models were trained and evaluated on the UTK-Face dataset, with cross-validation performed on the FG-NET dataset. The experimental results report 90.86% accuracy for the female-specific model and 89.21% accuracy for the male-specific model.</p></sec>
<sec id="s5">
<title>5. Apparent age estimation datasets</title>
<p>We briefly highlight three popular facial aging datasets widely employed in apparent age estimation research: the ChaLearn LAP-2015, ChaLearn LAP-2016, and APPA-REAL datasets.</p>
<p><bold>ChaLearn LAP-2015 dataset</bold>. This dataset (Escalera et al., <xref ref-type="bibr" rid="B14">2015</xref>) was collected for the 2015 edition of the ChaLearn LAP challenge and is the first dataset for apparent age estimation. It comprises 4,699 images, each labeled by at least ten different users, and is divided into three sets: 2,476 training, 1,136 validation, and 1,087 testing images.</p>
<p><bold>ChaLearn LAP-2016 dataset</bold>. This dataset (Escalera et al., <xref ref-type="bibr" rid="B15">2016</xref>), an extension of LAP-2015, consists of 7,591 face images collectively labeled by different human annotators. It is divided into three sets: 4,113 training, 1,500 validation, and 1,978 testing images. The testing-set labels are held out from the other sets but follow a similar age distribution.</p>
<p><bold>APPA-REAL dataset</bold>. This dataset (Agustsson et al., <xref ref-type="bibr" rid="B2">2017</xref>) is the first state-of-the-art database with both real and apparent age annotations. The images were collected using a labeling application, crowd-sourced data collection, data from the AgeGuess platform, and the assistance of Amazon Mechanical Turk (AMT) workers. The APPA-REAL database contains a total of 7,591 images of subjects aged between 0 and 95 years, taken under different conditions.</p>
<p>Sample images from these datasets are presented in <xref ref-type="fig" rid="F3">Figure 3</xref>, while the sample size, subject count, and age range of each dataset are presented in <xref ref-type="table" rid="T3">Table 3</xref>. <xref ref-type="fig" rid="F4">Figure 4</xref> shows facial aging datasets by the number of publications, while <xref ref-type="fig" rid="F5">Figure 5</xref> shows the performance, in terms of &#x003F5;-error, achieved by authors on the ChaLearn LAP-2015 dataset.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Sample images from LAP-2015, LAP-2016, and APPA-REAL datasets. LAP-2015 and LAP-2016 datasets were collected purposely for the Chalearn looking at people (LAP) competition in 2015 and 2016, respectively. It contains facial images with apparent ages. APPA-REAL dataset, on the other hand, was introduced by Agustsson et al. (<xref ref-type="bibr" rid="B2">2017</xref>) and contained images with both real and apparent age annotations.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-05-1025806-g0003.tif"/>
</fig>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>A summary of apparent age estimation databases.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Database</bold></th>
<th valign="top" align="center"><bold>&#x00023;Faces</bold></th>
<th valign="top" align="center"><bold>&#x00023;Subj</bold>.</th>
<th valign="top" align="center"><bold>Range</bold></th>
<th valign="top" align="center"><bold>Age type</bold></th>
<th valign="top" align="center"><bold>Year</bold></th>
<th valign="top" align="center"><bold>In-the-wild?</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">ChaLearn (Escalera et al., <xref ref-type="bibr" rid="B14">2015</xref>)</td>
<td valign="top" align="center">4,699</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">0&#x02013;100</td>
<td valign="top" align="center">Apparent</td>
<td valign="top" align="center">2015</td>
<td valign="top" align="center">yes</td>
</tr>
<tr>
<td valign="top" align="left">ChaLearn (Escalera et al., <xref ref-type="bibr" rid="B15">2016</xref>)</td>
<td valign="top" align="center">7,591</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">-</td>
<td valign="top" align="center">Apparent</td>
<td valign="top" align="center">2016</td>
<td valign="top" align="center">yes</td>
</tr>
<tr>
<td valign="top" align="left">APPA-REAL (Agustsson et al., <xref ref-type="bibr" rid="B2">2017</xref>)</td>
<td valign="top" align="center">7,591</td>
<td valign="top" align="center">7,000&#x0002B;</td>
<td valign="top" align="center">0&#x02013;95</td>
<td valign="top" align="center">Real and Apparent</td>
<td valign="top" align="center">2017</td>
<td valign="top" align="center">yes</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Facial aging datasets by the number of publications (CNN). MORPH-II dataset has the highest number of usage in the evaluation of real-age estimation models. LAP-2015 and LAP-2016 datasets are the literature&#x00027;s most common publicly-available apparent age estimation datasets.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-05-1025806-g0004.tif"/>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Performance, in terms of &#x003F5;-error, achieved by some authors on the Chalearn LAP-2015 dataset.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-05-1025806-g0005.tif"/>
</fig></sec>
<sec id="s6">
<title>6. Performance evaluation metrics for the state-of-the-art methods in apparent age estimation</title>
<p>The standard evaluation metrics for facial age estimation are the MAE (Onifade and Akinyemi, <xref ref-type="bibr" rid="B39">2014</xref>), the cumulative score (CS) (Lin et al., <xref ref-type="bibr" rid="B29">2012</xref>), accuracy (exact and 1-off) (Levi and Hassncer, <xref ref-type="bibr" rid="B26">2015</xref>), and the normal score (&#x003F5;-error) (Duan et al., <xref ref-type="bibr" rid="B12">2018a</xref>). MAE, CS, and accuracy are commonly employed for real age estimation, while MAE and &#x003F5;-error are the common evaluation metrics for apparent age estimation. MAE is the average of the absolute errors between the estimated ages and the ground truth. CS measures the performance of an estimator when the training data has images at nearly every age. Exact accuracy is the ratio of correct predictions to the total number of ground-truth labels, i.e., the percentage of face images classified into the correct age (and gender) class. One-off accuracy measures whether the ground-truth class label matches the predicted class label or falls into one of the two adjacent bins.</p>
<p>Mean Absolute Error is defined mathematically by:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:mfrac><mml:mrow><mml:mo>|</mml:mo><mml:msub><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:msubsup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup><mml:mo>|</mml:mo></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>l</italic><sub><italic>k</italic></sub> is the estimated age, <inline-formula><mml:math id="M2"><mml:msubsup><mml:mrow><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mo>*</mml:mo></mml:mrow></mml:msubsup></mml:math></inline-formula> is the ground-truth age for test image <italic>k</italic>, and <italic>N</italic> is the total number of test images.</p>
<p>The normal-score (&#x003F5;-error) metric was introduced for the ChaLearn LAP competition in 2015. It measures the error between an estimated age and the average labeled age. The &#x003F5;-error is calculated as follows:</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="left"><mml:mtr><mml:mtd><mml:mi>&#x003F5;</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mo>-</mml:mo><mml:mfrac><mml:mrow><mml:msup><mml:mrow><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>x</mml:mi><mml:mo>-</mml:mo><mml:mi>&#x003C3;</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow><mml:mrow><mml:mn>2</mml:mn><mml:msup><mml:mrow><mml:mi>&#x003BC;</mml:mi></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:mfrac></mml:mrow></mml:msup></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>x</italic> is the estimated age, &#x003C3; is the mean apparent-age label provided for a given face image, and &#x003BC; is the standard deviation of the apparent-age annotations for that image.</p>
<p>The &#x003F5;-error thus measures not only the deviation between the estimated value <italic>x</italic> and the mean labeled age &#x003C3; but also accounts for the standard deviation &#x003BC;. The final &#x003F5;-error is the average over all predictions; the lower the &#x003F5;-error, the better the estimator&#x00027;s performance.</p>
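<p>A direct, illustrative translation of Equation (2) using the symbols above (<italic>x</italic> the estimate, &#x003C3; the mean label, &#x003BC; the label standard deviation) might look as follows; the zero-deviation branch is our own convention for images with unanimous annotations:</p>

```python
import math

def epsilon_error(x, mean_label, std_label):
    """ChaLearn LAP epsilon-error: 1 - exp(-(x - sigma)^2 / (2 * mu^2)).

    mean_label (sigma): mean apparent-age label for the image.
    std_label (mu): standard deviation of the image's annotations.
    """
    if std_label == 0:
        # Unanimous annotations: score 0 only on an exact match.
        return 0.0 if x == mean_label else 1.0
    return 1.0 - math.exp(-((x - mean_label) ** 2) / (2.0 * std_label ** 2))
```

A perfect estimate yields 0, and the error saturates toward 1 as the estimate moves many label standard deviations away from the mean.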
<p><xref ref-type="fig" rid="F6">Figures 6</xref>, <xref ref-type="fig" rid="F7">7</xref> present the MAE and the &#x003F5;-error performance reported by authors on the Chalearn LAP 2015 and Chalearn LAP 2016 datasets, respectively.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Performance, in terms of MAE, achieved by some authors on the Chalearn LAP-2015 dataset.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-05-1025806-g0006.tif"/>
</fig>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Performance, in terms of &#x003F5;-error, achieved by some authors on the Chalearn LAP 2016 dataset.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-05-1025806-g0007.tif"/>
</fig></sec>
<sec sec-type="discussion" id="s7">
<title>7. Discussion</title>
<p>In recent years, research on apparent age estimation has progressed steadily in terms of performance. From the literature presented above, it is clear that no definitive conclusion can be drawn about the best type of algorithm for the estimation task. In <xref ref-type="table" rid="T1">Table 1</xref>, we describe each of the algorithms employed by researchers, highlighting their strengths and weaknesses; this should help in choosing the right algorithm for future work.</p>
<p>As presented in <xref ref-type="fig" rid="F4">Figure 4</xref>, the MORPH-II dataset has been used far more for real age estimation experiments than any other dataset, probably because its larger number of facial images helps algorithms learn the many individual aging traits better. However, LAP-2015, LAP-2016, and APPA-REAL are the three publicly available facial aging datasets with apparent-age labels for apparent age estimation. To avoid bias, and considering the peculiarities of each dataset, our analysis is based on these three datasets. Accordingly, we summarize in <xref ref-type="table" rid="T2">Table 2</xref> the performance of state-of-the-art algorithms, clearly distinguishing the best result so far on each of these datasets for apparent age estimation when evaluated with the MAE and &#x003F5;-error metrics.</p>
<p>Significant observations from earlier research on age estimation support several justifiable conclusions. We therefore highlight the points we deem most important:</p>
<list list-type="bullet">
<list-item><p>The image-processing methods employed for face detection, facial landmark localization, and face alignment affect the performance of an apparent age estimator.</p></list-item>
<list-item><p>The performance of learning algorithms is determined by many factors, including the size and label distribution of the employed dataset and the degree of image variability. Deep learning algorithms perform differently on different datasets, most likely due to the peculiarities of each dataset.</p></list-item>
<list-item><p>Data augmentation improves the performance of age estimation models, especially on unevenly distributed, relatively small datasets.</p></list-item>
<list-item><p>We also observed from the literature that models pre-trained on large-scale datasets before fine-tuning on the original dataset performed better than training the model on just the original dataset.</p></list-item>
<list-item><p>From this review, we observed that MC algorithms have been the most popular individual algorithms for apparent age estimation on the mentioned datasets.</p></list-item>
<list-item><p>We also observed that ranking and DLDL are the most suitable algorithms for estimation when the label distribution of the dataset is uneven.</p></list-item>
</list>
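<p>The augmentation point above can be sketched concretely. The function below is a minimal, illustrative example of label-preserving face augmentations (the function name, crop fraction, and NumPy-only approach are our own choices, not taken from the surveyed works; practical pipelines add color jitter, rotation, and similar transforms):</p>

```python
import numpy as np

def augment_face(img, rng):
    """Return label-preserving variants of a face image (H x W x C array):
    the original, a horizontal flip, and a random ~90% crop.
    All variants keep the same age label, enlarging a small dataset."""
    h, w = img.shape[:2]
    ch, cw = h - h // 10, w - w // 10            # crop size, ~90% per side
    top = int(rng.integers(0, h - ch + 1))       # random crop offsets
    left = int(rng.integers(0, w - cw + 1))
    return [
        img,
        img[:, ::-1],                            # horizontal flip
        img[top:top + ch, left:left + cw],       # random crop
    ]
```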
<p>For apparent age estimation on LAP-2015, a hybrid algorithm (LDL with expectation regression) achieved the best performance on both the MAE and &#x003F5;-error metrics (<xref ref-type="fig" rid="F5">Figures 5</xref>, <xref ref-type="fig" rid="F6">6</xref>). On the LAP-2016 dataset (<xref ref-type="fig" rid="F7">Figure 7</xref>), a hybrid algorithm combining label distribution learning with a classification algorithm likewise performed best. The choice among these algorithms continues to be debated in the facial aging research community, as is evident in <xref ref-type="table" rid="T2">Table 2</xref>. This suggests that no single approach is individually suitable for apparent age estimation; rather, MC, DLDL, or hybrid approaches (DLDL combined with another algorithm) appear more effective and consistent.</p></sec>
<sec id="s8">
<title>8. Conclusion and future directions</title>
<p>This work presents a comprehensive review of modern algorithms for apparent age estimation and their suitability, focusing on those that are most popular and those that appear to have been most successful. Our evaluations are based on LAP-2015, LAP-2016, and APPA-REAL, the most widely used publicly available facial aging datasets for apparent age estimation. Apparent age estimation answers the question &#x0201C;how old does the person look?&#x0201D; A hybrid algorithm showed the best performance on the LAP-2015 and LAP-2016 datasets. From this study, we deduce that MC was the most popularly used individual algorithm. We also assert that the performance of these age estimation algorithms is influenced not only by the choice of approach but also by other factors, including the image pre-processing method applied and the size and label distribution of the employed datasets.</p>
<p>However, several encouraging future directions in apparent age estimation may improve performance. Large datasets annotated with apparent-age labels, rather than real ages, would help improve the accuracy of apparent age estimation research. Further studies should also focus on predicting apparent age (how old the person looks) rather than a person&#x00027;s biological age, to enhance the practical, real-world applications of this research.</p></sec>
<sec id="s9">
<title>Author contributions</title>
<p>All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p></sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p></sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abbas</surname> <given-names>A. R.</given-names></name> <name><surname>Kareem</surname> <given-names>A. R.</given-names></name></person-group> (<year>2018</year>). <article-title>Intelligent age estimation from facial images using machine learning techniques</article-title>. <source>Iraqi J. Sci</source>. <volume>59</volume>, <fpage>724</fpage>&#x02013;<lpage>732</lpage>. <pub-id pub-id-type="doi">10.24996/ijs.2018.59.2A.10</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Agustsson</surname> <given-names>E.</given-names></name> <name><surname>Timofte</surname> <given-names>R.</given-names></name> <name><surname>Escalera</surname> <given-names>S.</given-names></name> <name><surname>Baro</surname> <given-names>X.</given-names></name> <name><surname>Guyon</surname> <given-names>I.</given-names></name> <name><surname>Rothe</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Apparent and real age estimation in still images with deep residual regressors on appa-real database,&#x0201D;</article-title> in <source>Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017- 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017</source> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>87</fpage>&#x02013;<lpage>94</lpage>.</citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Antipov</surname> <given-names>G.</given-names></name> <name><surname>Baccouche</surname> <given-names>M.</given-names></name> <name><surname>Berrani</surname> <given-names>S. A.</given-names></name> <name><surname>Dugelay</surname> <given-names>J. L.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Apparent age estimation from face images combining general and children-specialized deep learning models,&#x0201D;</article-title> in <source>IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops</source> (<publisher-loc>Las Vegas, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>801</fpage>&#x02013;<lpage>809</lpage>.</citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ashiqur Rahman</surname> <given-names>S.</given-names></name> <name><surname>Giacobbi</surname> <given-names>P.</given-names></name> <name><surname>Pyles</surname> <given-names>L.</given-names></name> <name><surname>Mullett</surname> <given-names>C.</given-names></name> <name><surname>Doretto</surname> <given-names>G.</given-names></name> <name><surname>Adjeroh</surname> <given-names>D. A.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning for biological age estimation</article-title>. <source>Brief. Bioinform</source>. <volume>22</volume>, <fpage>1767</fpage>&#x02013;<lpage>1781</lpage>. <pub-id pub-id-type="doi">10.1093/bib/bbaa021</pub-id><pub-id pub-id-type="pmid">32363395</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bouchrika</surname> <given-names>I.</given-names></name> <name><surname>Harrati</surname> <given-names>N.</given-names></name> <name><surname>Ladjailia</surname> <given-names>A.</given-names></name> <name><surname>Khedairia</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Age estimation from facial images based on hierarchical feature selection,&#x0201D;</article-title> in <source>16th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering, STA 2015</source> (<publisher-loc>Monastir</publisher-loc>), <fpage>393</fpage>&#x02013;<lpage>397</lpage>.</citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chang</surname> <given-names>K. Y.</given-names></name> <name><surname>Chen</surname> <given-names>C. S.</given-names></name> <name><surname>Hung</surname> <given-names>Y. P.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;A ranking approach for human age estimation based on face images,&#x0201D;</article-title> in <source>Proceedings-International Conference on Pattern Recognition</source> (<publisher-loc>Istanbul</publisher-loc>), <fpage>3396</fpage>&#x02013;<lpage>3399</lpage>.<pub-id pub-id-type="pmid">25576566</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Choi</surname> <given-names>S. E.</given-names></name> <name><surname>Lee</surname> <given-names>Y. J.</given-names></name> <name><surname>Lee</surname> <given-names>S. J.</given-names></name> <name><surname>Park</surname> <given-names>K. R.</given-names></name> <name><surname>Kim</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Age estimation using a hierarchical classifier based on global and local facial features</article-title>. <source>Pattern Recognit</source>. <volume>44</volume>, <fpage>1262</fpage>&#x02013;<lpage>1281</lpage>. <pub-id pub-id-type="doi">10.1016/j.patcog.2010.12.005</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dagher</surname> <given-names>I.</given-names></name> <name><surname>Barbara</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>Facial age estimation using pre-trained cnn and transfer learning</article-title>. <source>Multimed. Tools Appl</source>. <volume>80</volume>, <fpage>20369</fpage>&#x02013;<lpage>20380</lpage>. <pub-id pub-id-type="doi">10.1007/s11042-021-10739-w</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>Y.</given-names></name> <name><surname>Teng</surname> <given-names>S.</given-names></name> <name><surname>Fei</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>Rida</surname> <given-names>I.</given-names></name></person-group> (<year>2021</year>). <article-title>A multifeature learning and fusion network for facial age estimation</article-title>. <source>Sensors</source> <volume>21</volume>, <fpage>4597</fpage>. <pub-id pub-id-type="doi">10.3390/s21134597</pub-id><pub-id pub-id-type="pmid">34283133</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dib</surname> <given-names>M. Y. E.</given-names></name> <name><surname>El-saban</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <source>Human Age Estimation Using Enhanced Bio-Inspired Features (EBIF)</source>. <publisher-loc>Cairo</publisher-loc>: <publisher-name>Faculty of Computers and Information, Cairo University</publisher-name>.</citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Drobnyh</surname> <given-names>K. A.</given-names></name> <name><surname>Polovinkin</surname> <given-names>A. N.</given-names></name></person-group> (<year>2017</year>). <article-title>Using supervised deep learning for human age estimation problem</article-title>. <source>Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci</source>. <volume>XLII-2/W4</volume>, <fpage>97</fpage>&#x02013;<lpage>100</lpage>. <pub-id pub-id-type="doi">10.5194/isprs-archives-XLII-2-W4-97-2017</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duan</surname> <given-names>M.</given-names></name> <name><surname>Li</surname> <given-names>K.</given-names></name> <name><surname>Li</surname> <given-names>K.</given-names></name></person-group> (<year>2018a</year>). <article-title>An ensemble CNN2ELM for age estimation</article-title>. <source>IEEE Trans. Inf. Forensics Security</source> <volume>13</volume>, <fpage>758</fpage>&#x02013;<lpage>772</lpage>. <pub-id pub-id-type="doi">10.1109/TIFS.2017.2766583</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duan</surname> <given-names>M.</given-names></name> <name><surname>Li</surname> <given-names>K.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>K.</given-names></name></person-group> (<year>2018b</year>). <article-title>A hybrid deep learning CNN-ELM for age and gender classification</article-title>. <source>Neurocomputing</source> <volume>275</volume>, <fpage>448</fpage>&#x02013;<lpage>461</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2017.08.062</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Escalera</surname> <given-names>S.</given-names></name> <name><surname>Fabian</surname> <given-names>J.</given-names></name> <name><surname>Pardo</surname> <given-names>P.</given-names></name> <name><surname>Baro</surname> <given-names>X.</given-names></name> <name><surname>Gonzalez</surname> <given-names>J.</given-names></name> <name><surname>Escalante</surname> <given-names>H. J.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>&#x0201C;ChaLearn looking at people 2015: apparent age and cultural event recognition datasets and results,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision, Vol. 2015</source> (<publisher-loc>Santiago</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>243</fpage>&#x02013;<lpage>251</lpage>.</citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Escalera</surname> <given-names>S.</given-names></name> <name><surname>Torres</surname> <given-names>M. T.</given-names></name> <name><surname>Martinez</surname> <given-names>B.</given-names></name> <name><surname>Baro</surname> <given-names>X.</given-names></name> <name><surname>Escalante</surname> <given-names>H. J.</given-names></name> <name><surname>Guyon</surname> <given-names>I.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>&#x0201C;ChaLearn looking at people and faces of the world: face analysisworkshop and challenge 2016,&#x0201D;</article-title> in <source>IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops</source> (<publisher-loc>Las Vegas, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>706</fpage>&#x02013;<lpage>713</lpage>.</citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feng</surname> <given-names>S.</given-names></name> <name><surname>Lang</surname> <given-names>C.</given-names></name> <name><surname>Feng</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Luo</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Human facial age estimation by cost-sensitive label ranking and trace norm regularization</article-title>. <source>IEEE Trans. Multimedia</source> <volume>19</volume>, <fpage>136</fpage>&#x02013;<lpage>148</lpage>. <pub-id pub-id-type="doi">10.1109/TMM.2016.2608786</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>Y.</given-names></name> <name><surname>Guo</surname> <given-names>G.</given-names></name> <name><surname>Huang</surname> <given-names>T. S.</given-names></name></person-group> (<year>2010</year>). <article-title>Age synthesis and estimation via faces: a survey</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell</source>. <volume>32</volume>, <fpage>1955</fpage>&#x02013;<lpage>1976</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2010.36</pub-id><pub-id pub-id-type="pmid">20847387</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>B. B.</given-names></name> <name><surname>Xing</surname> <given-names>C.</given-names></name> <name><surname>Xie</surname> <given-names>C. W.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name> <name><surname>Geng</surname> <given-names>X.</given-names></name></person-group> (<year>2017</year>). <article-title>Deep label distribution learning with label ambiguity</article-title>. <source>IEEE Trans. Image Process</source>. <volume>26</volume>, <fpage>2825</fpage>&#x02013;<lpage>2838</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2017.2689998</pub-id><pub-id pub-id-type="pmid">28371776</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>B. B.</given-names></name> <name><surname>Zhou</surname> <given-names>H. Y.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name> <name><surname>Geng</surname> <given-names>X.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Age estimation using expectation of label distribution learning.,&#x0201D;</article-title> in <source>IJCAI International Joint Conference on Artificial Intelligence</source> (<publisher-loc>Stockholm</publisher-loc>), <fpage>712</fpage>&#x02013;<lpage>718</lpage>.<pub-id pub-id-type="pmid">34232898</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>G.</given-names></name> <name><surname>Fu</surname> <given-names>Y.</given-names></name> <name><surname>Dyer</surname> <given-names>C. R.</given-names></name> <name><surname>Huang</surname> <given-names>T. S.</given-names></name></person-group> (<year>2008</year>). <article-title>Image-based human age estimation by manifold learning and locally adjusted robust regression</article-title>. <source>IEEE Trans. Image Process</source>. <volume>17</volume>, <fpage>1178</fpage>&#x02013;<lpage>1188</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2008.924280</pub-id><pub-id pub-id-type="pmid">18586625</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gurpinar</surname> <given-names>F.</given-names></name> <name><surname>Kaya</surname> <given-names>H.</given-names></name> <name><surname>Dibeklioglu</surname> <given-names>H.</given-names></name> <name><surname>Salah</surname> <given-names>A. A.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Kernel ELM and CNN based facial age estimation,&#x0201D;</article-title> in <source>IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops</source> (<publisher-loc>Las Vegas, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>785</fpage>&#x02013;<lpage>791</lpage>.</citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huerta</surname> <given-names>I.</given-names></name> <name><surname>Fern&#x000E1;ndez</surname> <given-names>C.</given-names></name> <name><surname>Segura</surname> <given-names>C.</given-names></name> <name><surname>Hernando</surname> <given-names>J.</given-names></name> <name><surname>Prati</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <article-title>A deep analysis on age estimation</article-title>. <source>Pattern Recognit. Lett</source>. <volume>68</volume>, <fpage>239</fpage>&#x02013;<lpage>249</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2015.06.006</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huo</surname> <given-names>Z.</given-names></name> <name><surname>Yang</surname> <given-names>X.</given-names></name> <name><surname>Xing</surname> <given-names>C.</given-names></name> <name><surname>Zhou</surname> <given-names>Y.</given-names></name> <name><surname>Hou</surname> <given-names>P.</given-names></name> <name><surname>Lv</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>&#x0201C;Deep age distribution learning for apparent age estimation,&#x0201D;</article-title> in <source>IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops</source> (<publisher-loc>Las Vegas, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>722</fpage>&#x02013;<lpage>729</lpage>.</citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kj&#x000E6;rran</surname> <given-names>A.</given-names></name> <name><surname>Venner&#x000F8;d</surname> <given-names>C. B.</given-names></name> <name><surname>Bugge</surname> <given-names>E. S.</given-names></name></person-group> (<year>2021</year>). <article-title>Facial age estimation using convolutional neural networks</article-title>. <source>arXiv preprint arXiv:2105.06746</source>. <pub-id pub-id-type="doi">10.48550/arXiv.2105.06746</pub-id></citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lanitis</surname> <given-names>A.</given-names></name> <name><surname>Draganova</surname> <given-names>C.</given-names></name> <name><surname>Christodoulou</surname> <given-names>C.</given-names></name></person-group> (<year>2004</year>). <article-title>Comparing different classifiers for automatic age estimation</article-title>. <source>IEEE Trans. Syst. Man Cybern. B</source> <volume>34</volume>, <fpage>621</fpage>&#x02013;<lpage>628</lpage>. <pub-id pub-id-type="doi">10.1109/TSMCB.2003.817091</pub-id><pub-id pub-id-type="pmid">15369098</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Levi</surname> <given-names>G.</given-names></name> <name><surname>Hassncer</surname> <given-names>T.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Age and gender classification using convolutional neural networks,&#x0201D;</article-title> in <source>IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops</source> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>34</fpage>&#x02013;<lpage>42</lpage>.<pub-id pub-id-type="pmid">34825055</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name> <name><surname>Liu</surname> <given-names>J.</given-names></name> <name><surname>Lu</surname> <given-names>H.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;Learning ordinal discriminative features for age estimation.,&#x0201D;</article-title> in <source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Providence, RI</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2570</fpage>&#x02013;<lpage>2577</lpage>.</citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>W.</given-names></name> <name><surname>Lu</surname> <given-names>J.</given-names></name> <name><surname>Feng</surname> <given-names>J.</given-names></name> <name><surname>Xu</surname> <given-names>C.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name> <name><surname>Tian</surname> <given-names>Q.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Bridgenet: a continuity-aware probabilistic network for age estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Long Beach, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1145</fpage>&#x02013;<lpage>1154</lpage>.</citation>
</ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>C. T.</given-names></name> <name><surname>Li</surname> <given-names>D. L.</given-names></name> <name><surname>Lai</surname> <given-names>J. H.</given-names></name> <name><surname>Han</surname> <given-names>M. F.</given-names></name> <name><surname>Chang</surname> <given-names>J. Y.</given-names></name></person-group> (<year>2012</year>). <article-title>Automatic age estimation system for face images</article-title>. <source>Int. J. Adv. Robot. Syst</source>. <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.5772/52862</pub-id></citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Lu</surname> <given-names>J.</given-names></name> <name><surname>Feng</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Ordinal deep feature learning for facial age estimation,&#x0201D;</article-title> in <source>Proceedings-12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017</source> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>157</fpage>&#x02013;<lpage>164</lpage>.<pub-id pub-id-type="pmid">28809673</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Lu</surname> <given-names>J.</given-names></name> <name><surname>Feng</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Ordinal deep learning for facial age estimation</article-title>. <source>IEEE Trans. Circ. Syst. Video Technol</source>. <volume>29</volume>, <fpage>486</fpage>&#x02013;<lpage>501</lpage>. <pub-id pub-id-type="doi">10.1109/TCSVT.2017.2782709</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>W.</given-names></name> <name><surname>Chen</surname> <given-names>L.</given-names></name> <name><surname>Chen</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Age classification using convolutional neural networks with the multi-class focal loss</article-title>. <source>IOP Conf. Ser. Mater. Sci. Eng</source>. <volume>428</volume>, <fpage>12043</fpage>. <pub-id pub-id-type="doi">10.1088/1757-899X/428/1/012043</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>W.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Zeng</surname> <given-names>N.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Alsaadi</surname> <given-names>F. E.</given-names></name></person-group> (<year>2017</year>). <article-title>A survey of deep neural network architectures and their applications</article-title>. <source>Neurocomputing</source> <volume>234</volume>, <fpage>11</fpage>&#x02013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2016.12.038</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Kan</surname> <given-names>M.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Wu</surname> <given-names>S.</given-names></name> <name><surname>Liu</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>&#x0201C;AgeNet: deeply learned regressor and classifier for robust apparent age estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Santiago</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>258</fpage>&#x02013;<lpage>266</lpage>.</citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Zhang</surname> <given-names>K.</given-names></name> <name><surname>Zeng</surname> <given-names>X.</given-names></name></person-group> (<year>2022</year>). <article-title>Age estimation from facial images based on Gabor feature fusion and the CIASO-SA algorithm</article-title>. <source>CAAI Trans. Intell. Technol</source>. <pub-id pub-id-type="doi">10.1049/cit2.12084</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Malli</surname> <given-names>R. C.</given-names></name> <name><surname>Aygun</surname> <given-names>M.</given-names></name> <name><surname>Ekenel</surname> <given-names>H. K.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Apparent age estimation using ensemble of deep learning models,&#x0201D;</article-title> in <source>IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops</source> (<publisher-loc>Las Vegas, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>714</fpage>&#x02013;<lpage>721</lpage>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mathias</surname> <given-names>M.</given-names></name> <name><surname>Benenson</surname> <given-names>R.</given-names></name> <name><surname>Pedersoli</surname> <given-names>M.</given-names></name> <name><surname>Van Gool</surname> <given-names>L.</given-names></name></person-group> (<year>2014</year>). <article-title>Face detection without bells and whistles</article-title>. <source>Lecture Notes Comput. Sci</source>. <volume>8692</volume>, <fpage>720</fpage>&#x02013;<lpage>735</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-10593-2_47</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Onifade</surname> <given-names>O. F.</given-names></name></person-group> (<year>2015</year>). <article-title>A review on the suitability of machine learning approaches to facial age estimation</article-title>. <source>Int. J. Modern Educ. Comput. Sci</source>. <volume>7</volume>, <fpage>17</fpage>&#x02013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.5815/ijmecs.2015.12.03</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Onifade</surname> <given-names>O. F. W.</given-names></name> <name><surname>Akinyemi</surname> <given-names>J. D.</given-names></name></person-group> (<year>2014</year>). <article-title>A GW ranking approach for facial age estimation</article-title>. <source>Egypt. Comput. Sci. J</source>. <volume>38</volume>, <fpage>63</fpage>&#x02013;<lpage>74</lpage>.</citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Padme</surname> <given-names>S. E.</given-names></name> <name><surname>Desai</surname> <given-names>P. S.</given-names></name></person-group> (<year>2015</year>). <article-title>Estimation of age from face images</article-title>. <source>Int. J. Sci. Res</source>. <volume>4</volume>, <fpage>1927</fpage>&#x02013;<lpage>1931</lpage>. <pub-id pub-id-type="doi">10.21275/v4i12.NOV152411</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raman</surname> <given-names>V.</given-names></name> <name><surname>ELKarazle</surname> <given-names>K.</given-names></name> <name><surname>Then</surname> <given-names>P.</given-names></name></person-group> (<year>2022</year>). <article-title>Gender-specific facial age group classification using deep learning</article-title>. <source>Intell. Autom. Soft Comput</source>. <volume>34</volume>, <fpage>105</fpage>&#x02013;<lpage>118</lpage>. <pub-id pub-id-type="doi">10.32604/iasc.2022.025608</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ranjan</surname> <given-names>R.</given-names></name> <name><surname>Sankaranarayanan</surname> <given-names>S.</given-names></name> <name><surname>Castillo</surname> <given-names>C. D.</given-names></name> <name><surname>Chellappa</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;An all-in-one convolutional neural network for face analysis,&#x0201D;</article-title> in <source>Proceedings-12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017-1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017</source> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>17</fpage>&#x02013;<lpage>24</lpage>.</citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ranjan</surname> <given-names>R.</given-names></name> <name><surname>Zhou</surname> <given-names>S.</given-names></name> <name><surname>Chen</surname> <given-names>J. C.</given-names></name> <name><surname>Kumar</surname> <given-names>A.</given-names></name> <name><surname>Alavi</surname> <given-names>A.</given-names></name> <name><surname>Patel</surname> <given-names>V. M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>&#x0201C;Unconstrained age estimation with deep convolutional neural networks,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Santiago</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>351</fpage>&#x02013;<lpage>359</lpage>.</citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rothe</surname> <given-names>R.</given-names></name> <name><surname>Timofte</surname> <given-names>R.</given-names></name> <name><surname>Van Gool</surname> <given-names>L.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;DEX: deep expectation of apparent age from a single image,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision, Vol. 2015</source> (<publisher-loc>Santiago</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>252</fpage>&#x02013;<lpage>257</lpage>.</citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rothe</surname> <given-names>R.</given-names></name> <name><surname>Timofte</surname> <given-names>R.</given-names></name> <name><surname>Van Gool</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>Deep expectation of real and apparent age from a single image without facial landmarks</article-title>. <source>Int. J. Comput. Vis</source>. <volume>126</volume>, <fpage>144</fpage>&#x02013;<lpage>157</lpage>. <pub-id pub-id-type="doi">10.1007/s11263-016-0940-3</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ruiz-Del-Solar</surname> <given-names>J.</given-names></name> <name><surname>Verschae</surname> <given-names>R.</given-names></name> <name><surname>Correa</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Recognition of faces in unconstrained environments: a comparative study</article-title>. <source>EURASIP J. Adv. Signal Process</source>. <volume>2009</volume>, <fpage>1</fpage>&#x02013;<lpage>5</lpage>. <pub-id pub-id-type="doi">10.1155/2009/184617</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname> <given-names>W.</given-names></name> <name><surname>Guo</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>K.</given-names></name> <name><surname>Wang</surname> <given-names>B.</given-names></name> <name><surname>Yuille</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Deep regression forests for age estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Salt Lake City, UT</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2304</fpage>&#x02013;<lpage>2313</lpage>.</citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Voelkle</surname> <given-names>M. C.</given-names></name> <name><surname>Ebner</surname> <given-names>N. C.</given-names></name> <name><surname>Lindenberger</surname> <given-names>U.</given-names></name> <name><surname>Riediger</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Let me guess how old you are: effects of age, gender, and facial expression on perceptions of age</article-title>. <source>Psychol. Aging</source> <volume>27</volume>, <fpage>265</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1037/a0025065</pub-id><pub-id pub-id-type="pmid">21895379</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Yeung</surname> <given-names>D. Y.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;Multi-task warped Gaussian process for personalized age estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2622</fpage>&#x02013;<lpage>2629</lpage>.</citation>
</ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Z.</given-names></name> <name><surname>Qian</surname> <given-names>P.</given-names></name> <name><surname>Hou</surname> <given-names>Y.</given-names></name> <name><surname>Zeng</surname> <given-names>Z.</given-names></name></person-group> (<year>2022</year>). <article-title>Adaptive mean-residue loss for robust facial age estimation</article-title>. <source>arXiv preprint arXiv:2203.17156</source>. <pub-id pub-id-type="doi">10.1109/ICME52920.2022.9859703</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Mu</surname> <given-names>G.</given-names></name> <name><surname>Guo</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;A study on apparent age estimation,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Santiago</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>267</fpage>&#x02013;<lpage>273</lpage>.</citation>
</ref>
</ref-list> 
</back>
</article>