<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="editorial" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Robot. AI</journal-id>
<journal-title>Frontiers in Robotics and AI</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Robot. AI</abbrev-journal-title>
<issn pub-type="epub">2296-9144</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">712521</article-id>
<article-id pub-id-type="doi">10.3389/frobt.2021.712521</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Robotics and AI</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Artificial Intelligence and Human Movement in Industries and Creation</article-title>
<alt-title alt-title-type="left-running-head">Dimitropoulos et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">Editorial: AIMOVE in Industries and Creation</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Dimitropoulos</surname>
<given-names>Kosmas</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/704434/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Daras</surname>
<given-names>Petros</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/705655/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Manitsaris</surname>
<given-names>Sotiris</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/703905/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Fol Leymarie</surname>
<given-names>Frederic</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/39943/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Calinon</surname>
<given-names>Sylvain</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/226866/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>Information Technologies Institute, Centre for Research and Technology Hellas, <addr-line>Thessaloniki</addr-line>, <country>Greece</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>Centre for Robotics, MINES ParisTech, PSL Universit&#xe9; Paris, <addr-line>Paris</addr-line>, <country>France</country>
</aff>
<aff id="aff3">
<label>
<sup>3</sup>
</label>Department of Computing, Goldsmiths University of London, <addr-line>London</addr-line>, <country>United&#x20;Kingdom</country>
</aff>
<aff id="aff4">
<label>
<sup>4</sup>
</label>Idiap Research Institute, <addr-line>Martigny</addr-line>, <country>Switzerland</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited and reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/127451/overview">Astrid Marieke Rosenthal-von Der P&#xfc;tten</ext-link>, RWTH Aachen University, Germany</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Kosmas Dimitropoulos, <email>dimitrop@iti.gr</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Human-Robot Interaction, a section of the journal Frontiers in Robotics and&#x20;AI</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>12</day>
<month>07</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>8</volume>
<elocation-id>712521</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>05</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>06</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Dimitropoulos, Daras, Manitsaris, Fol Leymarie and Calinon.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Dimitropoulos, Daras, Manitsaris, Fol Leymarie and Calinon</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<related-article id="RA1" related-article-type="commentary-article" xlink:href="https://www.frontiersin.org/research-topics/10122" ext-link-type="uri">Editorial on the Research Topic <article-title>Artificial Intelligence and Human Movement in Industries and Creation</article-title>
</related-article>
<kwd-group>
<kwd>Artificial intelligence</kwd>
<kwd>human motion analysis</kwd>
<kwd>human-centred</kwd>
<kwd>machine learning</kwd>
<kwd>motion capture</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<p>Recent advances in human motion sensing technologies and machine learning have enhanced the potential of Artificial Intelligence (AI) to improve our quality of life, increase productivity and reshape multiple industries, including the cultural and creative industries. To achieve this, humans must remain at the center of AI: systems should learn from humans and collaborate effectively with them. Human-Centred Artificial Intelligence (HAI) is expected to create new opportunities and challenges that cannot yet be foreseen. Any type of programmable entity (e.g., robots, computers, autonomous vehicles, drones, Internet of Things devices) will combine several layers of perception with sophisticated HAI algorithms that detect human intentions and behaviors (<xref ref-type="bibr" rid="B13">Psaltis et&#x20;al., 2017</xref>) and learn continuously from them. Thus, every intelligent system will be able to capture human motion, analyze it (<xref ref-type="bibr" rid="B17">Zhang et&#x20;al., 2019</xref>), detect poses, and recognize gestures (<xref ref-type="bibr" rid="B3">Chatzis et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B15">Stergioulas et&#x20;al., 2021</xref>) and activities (<xref ref-type="bibr" rid="B12">Papastratis et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B11">Papastratis et&#x20;al., 2021</xref>; <xref ref-type="bibr" rid="B9">Konstantinidis et&#x20;al., 2021</xref>), including facial expressions and gaze (<xref ref-type="bibr" rid="B2">Bek et&#x20;al., 2020</xref>), enabling natural collaboration with humans.</p>
<p>Different sensing technologies, such as optical motion-capture (MoCap) systems, wearable inertial sensors, RGB or depth cameras and sensors of other modalities, are employed to capture human movement in a scene and transform this information into a digital representation. Most researchers focus on a single sensing modality (owing to the simplicity and low cost of the final system) and design either conventional machine learning algorithms or complex deep learning architectures for analyzing human motion data (<xref ref-type="bibr" rid="B8">Konstantinidis et&#x20;al., 2018</xref>; <xref ref-type="bibr" rid="B10">Konstantinidis et&#x20;al., 2020</xref>). Such cost-effective approaches have been applied to a wide range of application domains, including entertainment (<xref ref-type="bibr" rid="B7">Kaza et&#x20;al., 2016</xref>; <xref ref-type="bibr" rid="B1">Baker, 2020</xref>), health (<ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fcomp.2020.00020/full">Dias et&#x20;al.</ext-link>; <xref ref-type="bibr" rid="B9">Konstantinidis et&#x20;al., 2021</xref>), education (<xref ref-type="bibr" rid="B13">Psaltis et&#x20;al., 2017</xref>; <xref ref-type="bibr" rid="B14">Stefanidis et&#x20;al., 2019</xref>), sports (<xref ref-type="bibr" rid="B16">Tisserand et&#x20;al., 2017</xref>), robotics (<xref ref-type="bibr" rid="B6">Jaquier et&#x20;al., 2020</xref>; <xref ref-type="bibr" rid="B5">Gao et&#x20;al., 2021</xref>) and art and cultural heritage (<xref ref-type="bibr" rid="B4">Dimitropoulos et&#x20;al., 2018</xref>), showing the great potential of AI technology.</p>
<p>It is thus evident that HAI is currently at the center of scientific debate and technological exhibitions. Developing and deploying intelligent machines is both an economic challenge (e.g., flexibility, simplification, ergonomics) and a societal one (e.g., safety, transparency), not only from a factory perspective but for the real world in general. The papers in this Research Topic adopt different sensing technologies, such as depth sensors, inertial suits, inertial measurement units (IMUs) and force-sensing resistors (FSRs), to capture human movement, and they present diverse approaches for modeling the resulting temporal data.</p>
<p>More specifically, <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/frobt.2019.00120/full">Sakr et&#x20;al.</ext-link> investigate the feasibility of employing FSRs worn on the arm to measure force myography (FMG) signals for isometric force/torque estimation. A two-stage regression strategy is employed to enhance the performance of the FMG bands: in the first stage, three regression algorithms are evaluated, namely general regression neural network (GRNN), support vector regression (SVR) and random forest (RF) models, while a GRNN is used in the second stage. Two cases are considered to explore the performance of the FMG bands in estimating (a) 3-DoF force and 3-DoF torque and (b) the full 6-DoF force and torque at once. In addition, the impact of sensor placement and the spatial coverage of the FMG measurements is studied.</p>
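The two-stage strategy described above can be sketched as a stacked regression: stage-1 models produce intermediate force/torque estimates, and a stage-2 kernel regressor fuses them. The sketch below is illustrative only, on synthetic data, and substitutes scikit-learn's `KNeighborsRegressor` with distance weighting as a rough stand-in for a GRNN (which scikit-learn does not provide); it is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.multioutput import MultiOutputRegressor

# Synthetic stand-in for FMG band data: 16 FSR channels -> 6-DoF force/torque.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))                   # simulated FSR readings
W = rng.normal(size=(16, 6))
y = X @ W + 0.05 * rng.normal(size=(400, 6))     # simulated 6-DoF targets

X_tr, X_te, y_tr, y_te = X[:300], X[300:], y[:300], y[300:]

# Stage 1: three independent regressors (SVR needs a multi-output wrapper).
stage1 = [
    MultiOutputRegressor(SVR()),
    RandomForestRegressor(n_estimators=50, random_state=0),
    KNeighborsRegressor(weights="distance"),     # rough GRNN analogue
]
for model in stage1:
    model.fit(X_tr, y_tr)

# Stage 2: a kernel-style regressor fuses the concatenated stage-1 estimates.
Z_tr = np.hstack([m.predict(X_tr) for m in stage1])   # (300, 18)
Z_te = np.hstack([m.predict(X_te) for m in stage1])   # (100, 18)
stage2 = KNeighborsRegressor(weights="distance")
stage2.fit(Z_tr, y_tr)
y_hat = stage2.predict(Z_te)                     # final 6-DoF estimate
```

The design point this illustrates is that the second stage learns to weight and correct the first-stage estimators rather than the raw sensor channels.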
<p>
<ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/frobt.2020.00080/full">Manitsaris et&#x20;al.</ext-link> propose a multivariate time-series approach for the recognition of professional gestures and the forecasting of their trajectories. More specifically, the authors introduce a gesture operational model that describes how gestures are performed, based on assumptions about the dynamic association of body entities, their synergies, their serial and non-serial mediations, and their transitioning over time from one state to another. The assumptions of this model are then translated into an equation system for each body entity through state-space modeling. The proposed method is evaluated on four industrial datasets containing gestures, commands and actions.</p>
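The core idea of state-space trajectory forecasting can be illustrated with a deliberately simplified linear sketch: treat the gesture as a multivariate time series, fit a transition matrix by least squares, and roll it forward to forecast. This toy example uses synthetic data and a single linear transition matrix, a much cruder model than the per-body-entity equation system the authors propose.

```python
import numpy as np

# Toy multivariate "gesture" trajectory: 3 body-entity coordinates over time,
# generated by known linear dynamics x_{t+1} = A_true @ x_t + noise.
rng = np.random.default_rng(1)
A_true = np.array([[0.90, 0.10, 0.00],
                   [0.00, 0.95, 0.05],
                   [0.02, 0.00, 0.90]])
n_steps = 2000
x = np.zeros((n_steps, 3))
x[0] = rng.normal(size=3)
for t in range(n_steps - 1):
    x[t + 1] = A_true @ x[t] + 0.01 * rng.normal(size=3)

# Fit the transition matrix by least squares on consecutive states:
# lstsq solves X_past @ A = X_next, so the estimate is transposed back.
X_past, X_next = x[:-1], x[1:]
A_hat, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
A_hat = A_hat.T

# Forecast the next 10 steps of the trajectory from the last observed state.
forecast = [x[-1]]
for _ in range(10):
    forecast.append(A_hat @ forecast[-1])
forecast = np.array(forecast[1:])                # (10, 3) forecast horizon
```

With enough observations the estimated transition matrix recovers the generating dynamics closely, which is what makes the same machinery usable for both recognition (comparing fitted models) and forecasting (rolling a model forward).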
<p>A comprehensive review of machine learning approaches to motor learning is presented by <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fcomp.2020.00016/full">Caramiaux et&#x20;al.</ext-link> The review outlines existing machine learning models for motor learning and their adaptation capabilities, identifying three types of adaptation: parameter adaptation in probabilistic models, transfer and meta-learning in deep neural networks, and planning adaptation through reinforcement learning.</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fcomp.2020.00020/full">Dias et&#x20;al.</ext-link> present an innovative, personalized motor assessment tool capable of monitoring and tracking behavioral change in Parkinson&#x2019;s disease (PD) patients (mostly related to posture, walking/gait, agility, balance and coordination impairments). The proposed assessment tool is part of the i-Prognosis Game Suite, developed within the framework of the EU-funded i-Prognosis project (<ext-link ext-link-type="uri" xlink:href="http://www.i-prognosis.eu">www.i-prognosis.eu</ext-link>). Six different motor assessment tests integrated into the iPrognosis Games have been designed and developed based on the UPDRS Part III examination. The ability of the proposed tests to reflect motor skill status, similarly to the UPDRS Part III items, is validated with 27 participants with early and moderate&#x20;PD.</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/frobt.2021.537384/full">Bikias et&#x20;al.</ext-link> explore the use of IMU sensors for the detection of freezing-of-gait (FoG) episodes in Parkinson&#x2019;s disease patients and present a novel deep learning method, DeepFoG, aimed at facilitating the real-time detection of FoG episodes. The study investigates the feasibility of effectively predicting FoG events with a single wrist-worn inertial measurement unit (IMU). DeepFoG is based on training a deep learning model that automatically detects FoG events and differentiates them from stops and from walking with turns. Utilizing a single-arm sensor, DeepFoG has the potential to achieve accuracy similar to previously published methods but with fewer sensors; the main advantage of the proposed methodology is the simplicity and convenience of using a single smartwatch, rather than improved accuracy.</p>
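The single-sensor pipeline behind this kind of detector (segment the IMU stream into windows, extract a representation, classify each window) can be sketched as follows. This is a toy illustration on synthetic signals, not the DeepFoG architecture: it replaces the learned deep representation with hand-crafted window statistics and a random forest, and the `make_stream`/`windows` helpers are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative stand-in for wrist-IMU streams: 6 channels (3-axis accel + gyro).
# Class 0 = normal walking (slow sway); class 1 = FoG-like trembling (~6 Hz).
rng = np.random.default_rng(2)

def make_stream(freq_hz, n=1000, fs=100.0):
    t = np.arange(n) / fs                        # 100 Hz sampling
    base = np.sin(2 * np.pi * freq_hz * t)
    return base[:, None] + 0.05 * rng.normal(size=(n, 6))

def windows(stream, label, size=200, step=100):
    feats, labels = [], []
    for s in range(0, len(stream) - size + 1, step):
        w = stream[s:s + size]
        # Mean absolute first difference per channel: a crude frequency proxy
        # standing in for a learned deep feature extractor.
        feats.append(np.abs(np.diff(w, axis=0)).mean(axis=0))
        labels.append(label)
    return feats, labels

Xw, yw = [], []
for label, freq in [(0, 1.0), (1, 6.0)]:
    f, l = windows(make_stream(freq), label)
    Xw += f
    yw += l

clf = RandomForestClassifier(random_state=0).fit(np.array(Xw), np.array(yw))
```

The overlapping windows (here 2 s with 1 s hop) are what make real-time operation possible: each new window yields a fresh FoG/no-FoG decision with bounded latency.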
<p>The approaches discussed in this Research Topic offer readers a wide range of valuable paradigms that promote the use of AI and human movement analysis in different application domains, while providing rich material for scientific reflection.</p>
</body>
<back>
<sec id="s1">
<title>Author Contributions</title>
<p>KD and SM wrote the first draft. All authors contributed to manuscript revision.</p>
</sec>
<sec sec-type="COI-statement" id="s2">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Baker</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2020</year>). <source>The History of Motion Capture within the Entertainment Industry</source>. <publisher-loc>Helsinki, Finland</publisher-loc>: <publisher-name>Metropolia University of Applied Sciences (Thesis)</publisher-name>. <pub-id pub-id-type="doi">10.1109/vr46266.2020.00102</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bek</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Poliakoff</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Lander</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Measuring Emotion Recognition by People with Parkinson&#x2019;s Disease Using Eye-Tracking with Dynamic Facial Expressions</article-title>. <source>J.&#x20;Neurosci. Methods</source> <volume>331</volume>, <fpage>108524</fpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2019.108524</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chatzis</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Stergioulas</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Konstantinidis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A Comprehensive Study on Deep Learning-Based 3D Hand Pose Estimation Methods</article-title>. <source>Appl. Sci.</source> <volume>10</volume> (<issue>19</issue>), <fpage>6850</fpage>. <pub-id pub-id-type="doi">10.3390/app10196850</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Tsalakanidou</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Nikolopoulos</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kompatsiaris</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Grammalidis</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Manitsaris</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>A Multimodal Approach for the Safeguarding and Transmission of Intangible Cultural Heritage: The Case of I-Treasures</article-title>. <source>IEEE Intell. Syst.</source> <volume>33</volume> (<issue>6</issue>), <fpage>3</fpage>&#x2013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1109/mis.2018.111144858</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Silv&#xe9;rio</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Pignat</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Calinon</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>X.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Motion Mappings for Continuous Bilateral Teleoperation</article-title>. <source>IEEE Robotics Automation Lett.</source> <volume>6</volume> (<issue>3</issue>), <fpage>5048</fpage>&#x2013;<lpage>5055</lpage>. <pub-id pub-id-type="doi">10.1109/LRA.2021.3068924</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jaquier</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Ginsbourger</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Calinon</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Learning from Demonstration with Model-Based Gaussian Process</article-title>,&#x201d; in <conf-name>Conference on Robot Learning</conf-name> (PMLR), Proceedings of Machine Learning Research, <fpage>247</fpage>&#x2013;<lpage>257</lpage>. </citation>
</ref>
<ref id="B7">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Kaza</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Psaltis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Stefanidis</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Apostolakis</surname>
<given-names>K. C.</given-names>
</name>
<name>
<surname>Thermos</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<etal/>
</person-group> (<year>2016</year>). &#x201c;<article-title>Body Motion Analysis for Emotion Recognition in Serious Games</article-title>,&#x201d; in <conf-name>International Conference on Universal Access in Human-Computer Interaction</conf-name>. <publisher-loc>Cham</publisher-loc>: Springer, <fpage>33</fpage>&#x2013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-40244-4_4</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Konstantinidis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>A Deep Learning Approach for Analyzing Video and Skeletal Features in Sign Language Recognition</article-title>,&#x201d; in <conf-name>2018 IEEE International Conference on Imaging Systems and Techniques (IST)</conf-name> (IEEE), Krakow, Poland, <fpage>1</fpage>&#x2013;<lpage>6</lpage>. </citation>
</ref>
<ref id="B9">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Konstantinidis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>Towards Real-Time Generalized Ergonomic Risk Assessment for the Prevention of Musculoskeletal Disorders</article-title>,&#x201d; in <conf-name>14th ACM International Conference on Pervasive Technologies Related to Assistive Environments Conference</conf-name>, Corfu, Greece: Association for Computing Machinery. </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Konstantinidis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Langlet</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Ioakimidis</surname>
<given-names>I.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Validation of a Deep Learning System for the Full Automation of Bite and Meal Duration Analysis of Experimental Meal Videos</article-title>. <source>Nutrients</source> <volume>12</volume> (<issue>1</issue>), <fpage>209</fpage>. <pub-id pub-id-type="doi">10.3390/nu12010209</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Papastratis</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Continuous Sign Language Recognition through a Context-Aware Generative Adversarial Network</article-title>. <source>Sensors</source> <volume>21</volume> (<issue>7</issue>), <fpage>2437</fpage>. <pub-id pub-id-type="doi">10.3390/s21072437</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Papastratis</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Konstantinidis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Continuous Sign Language Recognition through Cross-Modal Alignment of Video and Text Embeddings in a Joint-Latent Space</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>91170</fpage>&#x2013;<lpage>91180</lpage>. <pub-id pub-id-type="doi">10.1109/access.2020.2993650</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Psaltis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Apostolakis</surname>
<given-names>K. C.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Multimodal Student Engagement Recognition in Prosocial Games</article-title>. <source>IEEE Trans. Games</source> <volume>10</volume> (<issue>3</issue>), <fpage>292</fpage>&#x2013;<lpage>303</lpage>. <pub-id pub-id-type="doi">10.1109/tciaig.2017.2743341</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stefanidis</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Psaltis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Apostolakis</surname>
<given-names>K. C.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Learning Prosocial Skills through Multiadaptive Games: a Case Study</article-title>. <source>J.&#x20;Comput. Educ.</source> <volume>6</volume> (<issue>1</issue>), <fpage>167</fpage>&#x2013;<lpage>190</lpage>. <pub-id pub-id-type="doi">10.1007/s40692-019-00134-8</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Stergioulas</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Chatzis</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Konstantinidis</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dimitropoulos</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Daras</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2021</year>). &#x201c;<article-title>3D Hand Pose Estimation via Aligned Latent Space Injection and Kinematic Losses</article-title>,&#x201d; in <conf-name>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops</conf-name>. </citation>
</ref>
<ref id="B16">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Tisserand</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Magnenat-Thalmann</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Unzueta</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Linaza</surname>
<given-names>M. T.</given-names>
</name>
<name>
<surname>Ahmadi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>O&#x2019;Connor</surname>
<given-names>N. E.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). &#x201c;<article-title>Preservation and Gamification of Traditional Sports</article-title>,&#x201d; in <source>Mixed Reality and Gamification for Cultural Heritage</source>. (<publisher-loc>Cham, Switzerland</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>421</fpage>&#x2013;<lpage>446</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-49607-8_17</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>H.-B.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.-X.</given-names>
</name>
<name>
<surname>Zhong</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Lei</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Yang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>J.-X.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>A Comprehensive Survey of Vision-Based Human Action Recognition Methods</article-title>. <source>Sensors</source> <volume>19</volume> (<issue>5</issue>), <fpage>1005</fpage>. <pub-id pub-id-type="doi">10.3390/s19051005</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>