<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2024.1359906</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Toward the design of persuasive systems for a healthy workplace: a real-time posture detection</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ataguba</surname> <given-names>Grace</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2370892/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/conceptualization/"/>
<role content-type="https://credit.niso.org/contributor-roles/data-curation/"/>
<role content-type="https://credit.niso.org/contributor-roles/methodology/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Orji</surname> <given-names>Rita</given-names></name>
<role content-type="https://credit.niso.org/contributor-roles/supervision/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff><institution>Department of Computer Science, Dalhousie University</institution>, <addr-line>Halifax, NS</addr-line>, <country>Canada</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Rui Qin, Manchester Metropolitan University, United Kingdom</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Iroju Olaronke, Adeyemi College of Education, Nigeria</p>
<p>Sakib Jalil, James Cook University, Australia</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Grace Ataguba <email>grace.ataguba&#x00040;dal.ca</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>06</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>7</volume>
<elocation-id>1359906</elocation-id>
<history>
<date date-type="received">
<day>22</day>
<month>12</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>05</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2024 Ataguba and Orji.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Ataguba and Orji</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Persuasive technologies, in connection with human factors engineering requirements for healthy workplaces, have played a significant role in ensuring a change in human behavior. Healthy workplaces suggest different best practices applicable to body posture, proximity to the computer system, movement, lighting conditions, computer system layout, and other significant psychological and cognitive aspects. Most importantly, body posture suggests how users should sit or stand in workplaces in line with best and healthy practices. In this study, we conducted two study phases (pilot and main) using two deep learning models: convolutional neural networks (CNN) and YOLO-V3. To train the two models, we collected posture datasets from Creative Commons-licensed YouTube videos and Kaggle. We classified the datasets into comfortable and uncomfortable postures. Results show that our YOLO-V3 model outperformed the CNN model with a mean average precision of 92%. Based on this finding, we recommend that the YOLO-V3 model be integrated into the design of persuasive technologies for a healthy workplace. Additionally, we provide future implications for integrating proximity detection, taking into consideration the ideal distance, in centimeters, that users should maintain in a healthy workplace.</p></abstract>
<kwd-group>
<kwd>persuasive technology</kwd>
<kwd>healthy workplace</kwd>
<kwd>posture</kwd>
<kwd>machine learning</kwd>
<kwd>YOLO-V3</kwd>
<kwd>convolutional neural networks</kwd>
</kwd-group>
<counts>
<fig-count count="20"/>
<table-count count="5"/>
<equation-count count="0"/>
<ref-count count="125"/>
<page-count count="22"/>
<word-count count="13542"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Medicine and Public Health</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1 Introduction</title>
<p>The importance of persuasive technologies in influencing changes in human behavior cannot be overemphasized. Persuasive technologies have an impact on users&#x00027; behavior and the choices they make (Rapoport, <xref ref-type="bibr" rid="B95">2017</xref>; Orji et al., <xref ref-type="bibr" rid="B88">2018</xref>; Darioshi and Lahav, <xref ref-type="bibr" rid="B38">2021</xref>; Wang et al., <xref ref-type="bibr" rid="B118">2023</xref>). As a result, persuasive technologies prioritize user-centered design, and they can assist users in leading a healthy lifestyle. Considering this, research has demonstrated the valuable roles these technologies play in preventing and aiding the management of illnesses (Schnall et al., <xref ref-type="bibr" rid="B104">2015</xref>; Karppinen et al., <xref ref-type="bibr" rid="B59">2016</xref>; Sonntag, <xref ref-type="bibr" rid="B111">2016</xref>; Bartlett et al., <xref ref-type="bibr" rid="B16">2017</xref>; Faddoul and Chatterjee, <xref ref-type="bibr" rid="B42">2019</xref>; Fukuoka et al., <xref ref-type="bibr" rid="B44">2019</xref>; Kim M. T. et al., <xref ref-type="bibr" rid="B61">2019</xref>; Oyibo and Morita, <xref ref-type="bibr" rid="B89">2021</xref>), promoting fitness and exercise (Bartlett et al., <xref ref-type="bibr" rid="B16">2017</xref>; Schooley et al., <xref ref-type="bibr" rid="B105">2021</xref>), and other significant areas (Jafarinaimi et al., <xref ref-type="bibr" rid="B52">2005</xref>; Anagnostopoulou et al., <xref ref-type="bibr" rid="B9">2019</xref>; Beheshtian et al., <xref ref-type="bibr" rid="B18">2020</xref>).</p>
<p>The workplace, a location, setting, or environment where people engage in work, has recorded significant unhealthy practices, including bad posture, over the years (Nanthavanij et al., <xref ref-type="bibr" rid="B83">2008</xref>; Ko Ko et al., <xref ref-type="bibr" rid="B63">2020</xref>; Roy, <xref ref-type="bibr" rid="B100">2020</xref>; van de Wijdeven et al., <xref ref-type="bibr" rid="B117">2023</xref>). In the context of this study, we consider work-from-home (WFH) contexts, offices, and other spaces where computers are employed to be workplaces. Best workplace practices are significant for a healthy working style. These practices cover the need to ensure that computer users maintain the right posture, follow the right movement practices, take regular breaks from computer systems, ensure they have proper lighting conditions, adhere to computer system layout, and attend to other significant psychological and cognitive aspects. Poor workplace practices can lead to various health issues, such as repetitive strain injuries, eyestrain, and postural problems (Ofori-Manteaw et al., <xref ref-type="bibr" rid="B86">2015</xref>; Workineh and Yamaura, <xref ref-type="bibr" rid="B121">2016</xref>; Alaydrus and Nusraningrum, <xref ref-type="bibr" rid="B7">2019</xref>). Research has shown that over 70% of neck injuries, other types of sprains and pains (for example, arm sprains and back pain), and stress are work-related (Tang, <xref ref-type="bibr" rid="B115">2022</xref>). This study presents the design of a persuasive system based on best posture practices. In addition, this study presents implications for designing persuasive systems based on requirements for users&#x00027; proximity to the computer system.</p>
<p>Machine learning, a subfield of artificial intelligence (AI), deals with developing models. These models assist computers in learning and detecting patterns of objects in the real world (Mahesh, <xref ref-type="bibr" rid="B75">2020</xref>; Sarker, <xref ref-type="bibr" rid="B101">2021</xref>). Hence, machine learning has contributed to several studies that have significantly detected patterns in human behaviors (Cheng et al., <xref ref-type="bibr" rid="B31">2017</xref>; Krishna et al., <xref ref-type="bibr" rid="B65">2018</xref>; Xu et al., <xref ref-type="bibr" rid="B123">2019</xref>; Chandra et al., <xref ref-type="bibr" rid="B28">2021</xref>; Jupalle et al., <xref ref-type="bibr" rid="B58">2022</xref>; Cob-Parro et al., <xref ref-type="bibr" rid="B33">2023</xref>), human emotions (Jaiswal and Nandi, <xref ref-type="bibr" rid="B53">2020</xref>; Gill and Singh, <xref ref-type="bibr" rid="B45">2021</xref>), and health-related behaviors (Reddy et al., <xref ref-type="bibr" rid="B96">2018</xref>; Mujumdar and Vaidehi, <xref ref-type="bibr" rid="B82">2019</xref>; Ahmad et al., <xref ref-type="bibr" rid="B2">2021</xref>). In this study, we leverage the opportunity of machine learning algorithms to design a persuasive system for detecting patterns of unhealthy postures and proximity to computers in workplaces.</p>
<p>As part of persuasive technology&#x00027;s goal to provide users with real-time feedback on their actions (which, in turn, influences their behavior), we report on our experiment comparing the convolutional neural network (CNN) and YOLO-V3 models. Research has shown the success of these models in real-time object detection (Tan et al., <xref ref-type="bibr" rid="B113">2021</xref>; Alsanad et al., <xref ref-type="bibr" rid="B8">2022</xref>). One significant drawback of CNN compared with YOLO-V3 reported in the literature is its requirement for a large number of training samples (Han et al., <xref ref-type="bibr" rid="B48">2018</xref>). The YOLO-V3 model, on the other hand, generates regions or bounding boxes around objects and returns a confidence value for each box. This implies that several boxes may be marked within an object, and the model&#x00027;s performance can be inferred from the confidence of its predictions (<xref ref-type="fig" rid="F1">Figure 1</xref>). For example, in <xref ref-type="fig" rid="F1">Figure 1</xref>, the YOLO-V3 model predicted the hardhat with 95% confidence. YOLO-V3 and CNN work in real time by analyzing images extracted frame by frame and providing a consistent update as these images change.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>A YOLO-V3 detection on a sample image. Reproduced from &#x0201C;YOLOv3 on custom dataset,&#x0201D; YouTube, uploaded by &#x0201C;Aman Jain,&#x0201D; 22 July 2021, <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=D4RQ7Rkrass">https://www.youtube.com/watch?v=D4RQ7Rkrass</ext-link>, Permissions: YouTube <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/t/terms">Terms of Service</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0001.tif"/>
</fig>
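To make the box-and-confidence behavior described above concrete, the following minimal Python sketch (our illustration, not the model code used in this study; the labels, scores, boxes, and threshold are assumed example values) filters YOLO-style detections by their confidence:

```python
# Illustrative sketch only: YOLO-V3-style output can be viewed as a list of
# detections, each a (label, confidence, bounding_box) tuple. A posture
# monitor would keep predictions above a confidence threshold and discard
# the rest as noise. All labels and scores below are assumed example values.

def filter_detections(detections, threshold=0.5):
    """Keep detections whose confidence meets or exceeds the threshold."""
    return [d for d in detections if d[1] >= threshold]

example = [
    ("hardhat", 0.95, (120, 40, 80, 60)),         # cf. Figure 1's 95% hardhat
    ("comfortable", 0.62, (30, 50, 200, 300)),    # posture class, kept
    ("uncomfortable", 0.31, (35, 55, 210, 310)),  # low confidence, discarded
]

kept = filter_detections(example)
```

Applying the same filter to every frame of a video stream yields the "consistent update" behavior described above, since the kept detections change as the images change.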
<p>Though we found significant studies applying persuasive systems to encourage computer users to take regular breaks in workplaces (Jafarinaimi et al., <xref ref-type="bibr" rid="B52">2005</xref>; Reeder et al., <xref ref-type="bibr" rid="B97">2010</xref>; Ludden and Meekhof, <xref ref-type="bibr" rid="B74">2016</xref>; Ren et al., <xref ref-type="bibr" rid="B98">2019</xref>), little is yet known about how users maintain the right posture before these regular breaks. Based on this limitation, the overarching goal of our study is to explore how people can become conscious of their unhealthy posture practices in workplaces (while sitting or standing). This connects with the main research question (RQ) we seek to answer: Can we design persuasive computers to detect unhealthy posture practices (such as sitting and standing) in workplaces?</p>
<p>People in workplaces adopt two types of posture positions: sitting and standing (Botter et al., <xref ref-type="bibr" rid="B23">2016</xref>). The sitting position affords the computer user space to relax the back correctly on a chair (<xref ref-type="fig" rid="F2">Figure 2</xref>, L). The standing position, in contrast, allows computer users to stand while using the computer system (<xref ref-type="fig" rid="F3">Figure 3</xref>). It is worth recalling that before COVID-19, these workplaces were office spaces. Since COVID-19, however, workplaces have extended to home spaces (Abdullah et al., <xref ref-type="bibr" rid="B1">2020</xref>; Javad Koohsari et al., <xref ref-type="bibr" rid="B54">2021</xref>). People now work from home, and the posture practices in these spaces have not been evaluated.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Correct ergonomics (L) and incorrect ergonomics (R) in a sitting workstation. Reproduced from &#x0201C;Computer Ergonomics,&#x0201D; YouTube, uploaded by &#x0201C;Pearls Classroom,&#x0201D; 5 October 2021, <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=XQTQ578wLzo">https://www.youtube.com/watch?v=XQTQ578wLzo</ext-link>, Permissions: YouTube <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/t/terms">Terms of Service</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0002.tif"/>
</fig>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Edited scenes. Reproduced from &#x0201C;Libertyville IL neck pain&#x02014;prevent bad posture with the right workstation,&#x0201D; YouTube, uploaded by &#x0201C;Functional Pain Relief,&#x0201D; 22 August 2018, <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=0M5C1BJdVsA">https://www.youtube.com/watch?v=0M5C1BJdVsA</ext-link>, Permissions: YouTube <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/t/terms">Terms of Service</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0003.tif"/>
</fig>
<p>The scientific contributions of this research are fourfold:</p>
<list list-type="simple">
<list-item><p>1. Provision of ground truth posture datasets:</p></list-item>
</list>
<p>We are contributing ground-truth posture datasets for the research community to explore related concepts in the future. These datasets can be increased in future work to enhance the accuracy and effectiveness of future technological interventions. Hence, this contribution will support researchers and designers in developing more robust and context-aware persuasive technologies.</p>
<list list-type="simple">
<list-item><p>2. Implementation of deep learning models for posture detection:</p></list-item>
</list>
<p>We present the development and implementation of deep learning models for detecting the posture practices of computer users. These models leverage advanced techniques to interpret and classify diverse body positions, contributing to the evolving landscape of human&#x02013;computer interaction. The models offer a technological solution to the challenge of real-time posture detection in the workplace. This contribution aligns with the forefront of research in machine learning and computer vision.</p>
<list list-type="simple">
<list-item><p>3. Real-time persuasive design for healthy workplace behavior:</p></list-item>
</list>
<p>We present a real-time persuasive design based on posture practices, thereby introducing a novel approach to promoting healthy workplace behavior. This contribution has practical implications for addressing issues related to sedentary work habits, discomfort, and potential health impacts associated with poor posture.</p>
<list list-type="simple">
<list-item><p>4. Integrating real-time feedback and persuasive elements:</p></list-item>
</list>
<p>Our design presents the potential and feasibility of persuasive technology to positively influence user behavior, fostering increased awareness and conscious efforts toward maintaining proper posture. This interdisciplinary contribution merges insights from computer science, psychology, and workplace health.</p>
<p>Collectively, these scientific contributions play a significant role in the advancement of knowledge in the fields of human&#x02013;computer interaction, machine learning, and persuasive technology, with direct applications for improving workplace wellbeing and behavior. The rest of the study is structured as follows: First, we review significant scholarly works on workplace practices, user health, and productivity; persuasive technologies and the workplace; machine learning and workplace practices; and accessibility technologies and healthy practices. Second, we present the methodology based on data collection and deep learning model deployment for the pilot study and the main study. Third, we report on the results of the pilot and main studies. In addition, we compare outcomes for deploying the CNN and YOLO-V3 models toward persuasive, healthy workplace designs. Fourth, we present a discussion of the results from the pilot and main studies. Fifth, we report on the limitations of the study and present design recommendations to guide future research. Sixth, we conclude by summarizing the study and drawing an inference based on the results, limitations, and recommendations for future studies.</p></sec>
<sec id="s2">
<title>2 Related work</title>
<p>This section provides an in-depth exploration of related work on the relationship between workplace practices, user health, and productivity, as well as other significant themes: persuasive technologies and workplace practices, machine learning and workplace practices, and accessibility technologies and healthy practices.</p>
<sec>
<title>2.1 Workplace practices, user health, and productivity</title>
<p>Workplace practices cover significant areas such as proper chair and desk height, appropriate monitor placement, ergonomic keyboard and mouse usage, reduction of glare and reflection, the importance of regular breaks, and promoting movement through sit-stand workstations (Dainoff et al., <xref ref-type="bibr" rid="B34">2012</xref>; <xref ref-type="bibr" rid="B41">2023</xref>). Research has established a relationship between failing to adhere to good workplace practices and the consequences for computer users&#x00027; health. These include the potential for musculoskeletal disorders, eye strain, and other common health issues related to prolonged computer use (Dainoff et al., <xref ref-type="bibr" rid="B34">2012</xref>; Woo et al., <xref ref-type="bibr" rid="B120">2016</xref>; Boadi-Kusi et al., <xref ref-type="bibr" rid="B20">2022</xref>). According to Nimbarte et al. (<xref ref-type="bibr" rid="B85">2013</xref>), Shahidi et al. (<xref ref-type="bibr" rid="B107">2015</xref>), and Barrett et al. (<xref ref-type="bibr" rid="B15">2020</xref>), the force on the neck increases proportionately as the head tilts at a higher angle. The long-term impact of this, as shown in <xref ref-type="table" rid="T1">Table 1</xref>, is a risk of spine damage.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Relationship between the human head anatomy and exerted force leading to spine damage.<sup>a</sup></p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#919498;color:#ffffff">
<th valign="top" align="left"><bold>S/N</bold></th>
<th valign="top" align="center"><bold>Degrees</bold></th>
<th valign="top" align="center"><bold>Force (lb)</bold></th>
<th valign="top" align="left"><bold>Spine damage risk level</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1.</td>
<td valign="top" align="center">0</td>
<td valign="top" align="center">10&#x02013;12</td>
<td valign="top" align="left">Low or no risk</td>
</tr> <tr>
<td valign="top" align="left">2.</td>
<td valign="top" align="center">15</td>
<td valign="top" align="center">27</td>
<td valign="top" align="left">Medium</td>
</tr> <tr>
<td valign="top" align="left">3.</td>
<td valign="top" align="center">30</td>
<td valign="top" align="center">40</td>
<td valign="top" align="left">High</td>
</tr> <tr>
<td valign="top" align="left">4.</td>
<td valign="top" align="center">60</td>
<td valign="top" align="center">50</td>
<td valign="top" align="left">Very high</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p><sup>a</sup><ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=0M5C1BJdVsA">https://www.youtube.com/watch?v=0M5C1BJdVsA</ext-link>.</p>
</table-wrap-foot>
</table-wrap>
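The angle-to-risk relationship in Table 1 can be read as a simple banded lookup. The following Python sketch (our illustration; the table reports only the discrete angles 0, 15, 30, and 60 degrees, so treating intermediate angles as falling in the lower band is our assumption, not part of the cited studies):

```python
# Banded lookup over Table 1: head-tilt angle (degrees) -> spine damage
# risk level. The band boundaries between the table's discrete angles
# (0, 15, 30, 60 degrees) are assumed for illustration.

def spine_damage_risk(angle_deg):
    """Return the Table 1 risk level for a given head-tilt angle."""
    if angle_deg < 15:
        return "low or no risk"   # ~10-12 lb of force at 0 degrees
    if angle_deg < 30:
        return "medium"           # ~27 lb at 15 degrees
    if angle_deg < 60:
        return "high"             # ~40 lb at 30 degrees
    return "very high"            # ~50 lb at 60 degrees
```

A posture-detection system estimating head tilt from video frames could use such a mapping to decide when feedback to the user is warranted.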
<p>In addition, computer users&#x00027; health is typically at risk due to repetitive stress injuries (Borhany et al., <xref ref-type="bibr" rid="B22">2018</xref>; Mowatt et al., <xref ref-type="bibr" rid="B80">2018</xref>; Iyengar et al., <xref ref-type="bibr" rid="B51">2020</xref>; Roy, <xref ref-type="bibr" rid="B100">2020</xref>; Steiger et al., <xref ref-type="bibr" rid="B112">2021</xref>). Repetitive strain injury (RSI) is defined as &#x0201C;a chronic condition that develops because of repetitive, forceful, or awkward hand movements for prolonged periods leading to damage to muscles, tendons, and nerves of the neck, shoulder, forearm, and hand, which can cause pain, weakness, numbness, or impairment of motor control&#x0201D; (Sarla, <xref ref-type="bibr" rid="B102">2019</xref>). This implies that computer use involving extended periods of typing and mouse use without proper ergonomics can increase the risk of RSIs. In addition, maintaining poor posture and not adhering to ergonomic requirements when setting up workstations can contribute to this risk. For example, Borhany et al. (<xref ref-type="bibr" rid="B22">2018</xref>) carried out a study to examine common musculoskeletal problems arising from the repetitive use of computers. They surveyed 150 office workers and found that 67 of them suffered from repetitive stress injuries of the lower back, neck, shoulder, and wrist/hand. In addition, they found that these injuries were caused by continuous use of computers without breaks, bad lighting, bad posture, and poorly designed ergonomics in offices. While workplace tasks are typically characterized by repetitive actions, it has become imperative to design workplace technologies that support users in carrying out repetitive tasks without straining any part of the body (Moore, <xref ref-type="bibr" rid="B79">2019</xref>; Johnson et al., <xref ref-type="bibr" rid="B57">2020</xref>).</p>
<p>It is important to state that research has found that repetitive stress injuries and other related health issues affect the productivity of computer users in workplaces. In other words, a well-designed workplace not only improves the user&#x00027;s comfort but also enhances work efficiency and overall job satisfaction (Pereira et al., <xref ref-type="bibr" rid="B93">2019</xref>; Baba et al., <xref ref-type="bibr" rid="B13">2021</xref>; Franke and Nadler, <xref ref-type="bibr" rid="B43">2021</xref>). Pereira et al. (<xref ref-type="bibr" rid="B93">2019</xref>) examined 763 office workers in a 12-week study. They interpreted office productivity relative to absenteeism from work due to neck pain. The results from this study show that those exposed to healthy workplace practices and neck-specific exercise training had fewer records of absenteeism. Pereira et al. reported that individuals with unhealthy workplace practices and limited access to health promotion information were more likely to be less productive, i.e., absent from work. Baba et al. (<xref ref-type="bibr" rid="B13">2021</xref>) conducted a study involving 50 newly employed staff in an organization. The staff were divided into an experimental group (with healthy workplace practices, e.g., comfortable computer desks) and a control group (with unhealthy workplace practices, such as less comfortable furniture). The study revealed a significant impact on the work productivity of the experimental group compared with the control group (based on a <italic>t</italic>-test showing that t.cal = 0.08; t.tab = 1.71, where t.cal is the calculated <italic>t</italic>-test value and t.tab is the value of t in the distribution table).</p>
<p>While many organizations focus on employee training and sensitization programs for healthy workplace practices, limited research has been reported on workplace culture, employee training, computer workstation assessment, and the benefits of posture assessment tools. This study explores the potential of persuasive technologies for enhancing effective workplace posture practices. These technologies can serve as posture assessment tools, providing valuable feedback to organizations on the best ways to support their employees.</p></sec>
<sec>
<title>2.2 Persuasive technologies and the workplace</title>
<p>Persuasive technologies and workplace practices are two distinct areas of study and practice, but they intersect in designing user interfaces and technology systems that promote healthy workplace practices for technology users. Overall, this will enhance technology users&#x00027; wellbeing and productivity. Research has explored persuasive technologies in relation to best workplace practices. This includes taking regular breaks (Jafarinaimi et al., <xref ref-type="bibr" rid="B52">2005</xref>; Ludden and Meekhof, <xref ref-type="bibr" rid="B74">2016</xref>; Ren et al., <xref ref-type="bibr" rid="B98">2019</xref>), fitness apps (Mohadis et al., <xref ref-type="bibr" rid="B78">2016</xref>; Ahtinen et al., <xref ref-type="bibr" rid="B4">2017</xref>; Paay et al., <xref ref-type="bibr" rid="B90">2022</xref>), feedback systems and wearable devices (Bootsman et al., <xref ref-type="bibr" rid="B21">2019</xref>; Jiang et al., <xref ref-type="bibr" rid="B55">2021</xref>), workstation movement (Min et al., <xref ref-type="bibr" rid="B77">2015</xref>; Damen et al., <xref ref-type="bibr" rid="B35">2020a</xref>,<xref ref-type="bibr" rid="B36">b</xref>), chair, desk, and monitor height adjustments (Kronenberg and Kuflik, <xref ref-type="bibr" rid="B66">2019</xref>; Kronenberg et al., <xref ref-type="bibr" rid="B67">2022</xref>), posture correction (Min et al., <xref ref-type="bibr" rid="B77">2015</xref>; Bootsman et al., <xref ref-type="bibr" rid="B21">2019</xref>; Kim M. T. 
et al., <xref ref-type="bibr" rid="B61">2019</xref>), mouse/keyboard use and reduction of glare and reflection (Bailly et al., <xref ref-type="bibr" rid="B14">2016</xref>), and other healthy work behaviors (Berque et al., <xref ref-type="bibr" rid="B19">2011</xref>; Mateevitsi et al., <xref ref-type="bibr" rid="B76">2014</xref>; Gomez-Carmona and Casado-Mansilla, <xref ref-type="bibr" rid="B46">2017</xref>; Jiang et al., <xref ref-type="bibr" rid="B55">2021</xref>; Brombacher et al., <xref ref-type="bibr" rid="B26">2023</xref>; Haliburton et al., <xref ref-type="bibr" rid="B47">2023</xref>; Robledo Yamamoto et al., <xref ref-type="bibr" rid="B99">2023</xref>).</p>
<p><xref ref-type="table" rid="T2">Table 2</xref> summarizes closely related work on persuasive technologies with respect to workplace practices. We present discussions based on instances of the workplace practices listed previously. These include taking regular breaks, fitness apps, feedback systems, workstation movement, chair, desk, and monitor height adjustments, posture correction, mouse/keyboard use, reduction of glare and reflection, and other healthy practices. Jafarinaimi et al. (<xref ref-type="bibr" rid="B52">2005</xref>) developed sensor-based office chairs that encourage users to break away from their computers. Every 2 min, the chair slouches from an upright position to a backward bend, signifying the need for computer users to take a break. They evaluated the chair with a single user (a 55-year-old university staff member). The results from the study showed how the sensor-based office chair greatly influenced the user&#x00027;s attitude toward breaking away from their computer.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Summary of research on persuasive technologies and workplace practices.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#919498;color:#ffffff">
<th valign="top" align="left"><bold>S/N</bold></th>
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Technology</bold></th>
<th valign="top" align="left" colspan="8"><bold>Workplace practices covered</bold></th>
</tr>
<tr style="background-color:#919498;color:#ffffff">
<th/>
<th/>
<th/>
<th valign="top" align="left"><bold>Chair and desk height</bold></th>
<th valign="top" align="left"><bold>Monitor placement</bold></th>
<th valign="top" align="left"><bold>Keyboard and mouse use</bold></th>
<th valign="top" align="left"><bold>Reduction of glare and reflection</bold></th>
<th valign="top" align="left"><bold>Regular breaks</bold></th>
<th valign="top" align="left"><bold>Workstation movement</bold></th>
<th valign="top" align="left"><bold>Posture correction</bold></th>
<th valign="top" align="left"><bold>Other healthy practices</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1.</td>
<td valign="top" align="left">Haque et al. (<xref ref-type="bibr" rid="B49">2020</xref>)</td>
<td valign="top" align="left">Mobile App</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">2.</td>
<td valign="top" align="left">Damen et al. (<xref ref-type="bibr" rid="B35">2020a</xref>)</td>
<td valign="top" align="left">Tangible</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">3.</td>
<td valign="top" align="left">Damen et al. (<xref ref-type="bibr" rid="B36">2020b</xref>)</td>
<td valign="top" align="left">Phones, Tablets and Notebooks</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">4.</td>
<td valign="top" align="left">Min et al. (<xref ref-type="bibr" rid="B77">2015</xref>)</td>
<td valign="top" align="left">Sensors</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td valign="top" align="left">&#x025A1;</td>
<td/>
</tr> <tr>
<td valign="top" align="left">5.</td>
<td valign="top" align="left">Ludden and Meekhof (<xref ref-type="bibr" rid="B74">2016</xref>)</td>
<td valign="top" align="left">Tangible</td>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">6.</td>
<td valign="top" align="left">Jafarinaimi et al. (<xref ref-type="bibr" rid="B52">2005</xref>)</td>
<td valign="top" align="left">Tangible</td>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">7.</td>
<td valign="top" align="left">Kronenberg and Kuflik (<xref ref-type="bibr" rid="B66">2019</xref>)</td>
<td valign="top" align="left">Robot</td>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">8.</td>
<td valign="top" align="left">Jiang et al. (<xref ref-type="bibr" rid="B55">2021</xref>)</td>
<td valign="top" align="left">Tangible</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
</tr> <tr>
<td valign="top" align="left">9.</td>
<td valign="top" align="left">Mohadis et al. (<xref ref-type="bibr" rid="B78">2016</xref>)</td>
<td valign="top" align="left">Web App</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">10.</td>
<td valign="top" align="left">Gomez-Carmona and Casado-Mansilla (<xref ref-type="bibr" rid="B46">2017</xref>)</td>
<td valign="top" align="left">Tangible</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
</tr> <tr>
<td valign="top" align="left">11.</td>
<td valign="top" align="left">Bootsman et al. (<xref ref-type="bibr" rid="B21">2019</xref>)</td>
<td valign="top" align="left">Tangible and Mobile App</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
</tr> <tr>
<td valign="top" align="left">12.</td>
<td valign="top" align="left">Kronenberg et al. (<xref ref-type="bibr" rid="B67">2022</xref>)</td>
<td valign="top" align="left">Robot</td>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr> <tr>
<td valign="top" align="left">13.</td>
<td valign="top" align="left">Kim W. et al. (<xref ref-type="bibr" rid="B62">2019</xref>)</td>
<td valign="top" align="left">Robot</td>
<td/>
<td/>
<td/>
<td/>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td/>
</tr> <tr>
<td valign="top" align="left">14.</td>
<td valign="top" align="left">Bailly et al. (<xref ref-type="bibr" rid="B14">2016</xref>)</td>
<td valign="top" align="left">Actuators</td>
<td/>
<td/>
<td valign="top" align="left">&#x025A1;</td>
<td valign="top" align="left">&#x025A1;</td>
<td/>
<td/>
<td/>
<td/>
</tr></tbody>
</table>
</table-wrap>
<p>Mohadis et al. (<xref ref-type="bibr" rid="B78">2016</xref>) developed a low-fidelity web-based prototype to encourage physical activity among older office workers. They considered 23 persuasive principles as they relate to physical activity. These include reduction, tunneling, tailoring, personalization, self-monitoring, simulation, rehearsal, dialogue support, praise, rewards, reminders, suggestions, similarity, social role, credibility support, expertise, real-world feel, third-party endorsements, verifiability, social support/social learning, social comparison, normative influence, social facilitation, competition, and recognition. Reduction was targeted at making complex tasks simple to complete. Tunneling used the system to guide users while persuading them to change their behavior. Self-monitoring ensures that users can keep track of their behavior. Simulation demonstrates aspects of behaviors to interpret cause-and-effect relationships. Rehearsal provides an opportunity to keep practicing a behavior toward change. In addition, the remaining persuasive principles (from dialogue support and praise through competition and recognition) were aimed at enhancing a change in the user&#x00027;s physical activity behaviors. The authors experimented with 10 participants and found that only two (2) persuasive principles were perceived positively: dialogue support and credibility support.</p>
<p>Bootsman et al. (<xref ref-type="bibr" rid="B21">2019</xref>) explored wearable posture monitoring systems for nurses in workplaces. Nurses were considered to carry out repetitive bending throughout their work shifts. The system was designed to track their lower back posture. The system is connected to a mobile application that provides feedback on the different posture positions of users and tips for changing bad postures. The system was evaluated with six (6) nurses (aged between 20 and 65 years) for 4 days during work hours. Based on the Intrinsic Motivation Inventory, the results show that interest, perceived competence, usefulness, relatedness, and effort/importance scored highly. In addition, the results from the qualitative analysis show that participants appreciated the comfort of the wearable system, though they objected to the frequency of the beeps, which caused some distraction.</p>
<p>Haque et al. (<xref ref-type="bibr" rid="B49">2020</xref>) explored computer workstation movements similar to regular breaks. Unlike a regular break, computer users are encouraged to walk around and keep track of their physical activity level. The authors conducted an experiment with 220 office workers from the United Kingdom, Ireland, Finland, and Bangladesh for 4 weeks while evaluating their &#x0201C;IGO mHealth app.&#x0201D; The app monitors office workers&#x00027; meal intake and work periods to send a 10-min interval walk-around reminder. The app tracks this movement while setting a target of 1,000 steps every 10 min. The app incorporates the leaderboard gaming element, encouraging competition through persuasion. The results from this study show a trend toward weight loss, and a follow-up interview revealed three (3) persuasive principles that were perceived positively: (1) autonomy, (2) competence, and (3) relatedness. Autonomy reflects how the app helped users achieve their set goals. Competence reflects how confident they were in their capability to use the app to perform different tasks. Relatedness reflects how they were able to use the app to establish social connections.</p>
<p>Kronenberg et al. (<xref ref-type="bibr" rid="B67">2022</xref>) developed robotic arms that can automatically adjust computer system screens. The robot detects the distance between the screen and the user&#x00027;s seating position, then calculates a new screen orientation and adjusts it to keep a healthy distance between users and their computer screens. The authors conducted an experiment with 35 participants (25&#x02013;68 years old) in their workspaces. The results of a one-sample Wilcoxon signed-rank test show that participants could effectively complete the tasks and scenarios using the system (<italic>p</italic> &#x0003C; 0.001). However, the tests did not support that the screen moved at the right pace (<italic>p</italic> = 0.189), that it moved at the appropriate moment (<italic>p</italic> = 0.904), that it was well-adjusted to users&#x00027; pose (<italic>p</italic> = 0.163), or that users felt less distracted by the movement of the screen (<italic>p</italic> = 0.028).</p>
<p>Kim M. T. et al. (<xref ref-type="bibr" rid="B61">2019</xref>) conducted experiments with a robot to support posture corrections during object lifting with 10 adults (30&#x02013;34 years old). They considered five (5) different joints in the human body: (1) hips, (2) knees, (3) ankles, (4) shoulders, and (5) elbows. The results of their <italic>t</italic>-test analysis showed that the robot significantly lowered the overloading effect in all joints: shoulder (<italic>p</italic> &#x0003C; 0.001), elbow (<italic>p</italic> &#x0003C; 0.001), hip (<italic>p</italic> &#x0003C; 0.001), knee (<italic>p</italic> &#x0003C; 0.001), and ankle (<italic>p</italic> &#x0003C; 0.001). This implies that the robot can promote better posture practices in workplaces.</p>
<p>Bailly et al. (<xref ref-type="bibr" rid="B14">2016</xref>) developed a &#x0201C;<italic>LivingDesktop</italic>&#x0201D; that helps users reduce reflection from the monitor screen. In addition, the system allows users to adjust the mouse and keyboard positions to improve ergonomics. The authors evaluated the system with 36 desktop users (22&#x02013;40 years old). The results from this study show that users liked the adjustable features because they fit their needs for video conferencing, tidying their workspace, and maintaining the right posture. On the other hand, some users criticized the system for its distractions in workspaces.</p>
<p>Jiang et al. (<xref ref-type="bibr" rid="B55">2021</xref>) developed a smart t-shirt wearable application for depression management in workplaces. They considered emotion regulation for depression management based on the movement of the shoulders and arms. The smart t-shirt changes resistance based on users&#x00027; emotions. The fabric maintains a resistance of 180 k&#x003A9; while relaxed (positive emotion) and 400 k&#x003A9; when stretched (negative emotion). In view of this, they tested the smart t-shirt with six (6) healthcare workers for 5 days and found that the smart t-shirts regulated healthcare workers&#x00027; emotions positively at work.</p>
<p>While most of these persuasive technologies have explored user interface design and user experience evaluation, we found other state-of-the-art practices employing machine learning techniques. Machine learning yields more intelligent, data-oriented systems, making them flexible enough to learn new patterns as users continue to interact with them. We present the extent to which machine learning has been tailored to enhance workplace practices in Section 2.3.</p></sec>
<sec>
<title>2.3 Machine learning and workplace practices</title>
<p>Machine learning can significantly impact the design of products for healthy workplaces. It interprets a wide range of data types, including sensor data, motion, eye movements, and human body movement. Machine learning models can be embedded into wearable devices, phones, and computers, enabling the detection of patterns in data and the optimization of communication with humans based on the diverse data they were trained on. For instance, facial recognition models, as supported in self-service photo booths (Kember, <xref ref-type="bibr" rid="B60">2014</xref>), can detect specified height, width, and head position orientations (Chen et al., <xref ref-type="bibr" rid="B29">2016</xref>).</p>
<p>Some significant research studies have delved into the application of machine learning in the realm of workplace practices. These studies have particularly focused on classifying healthy and active work styles (Rabbi et al., <xref ref-type="bibr" rid="B94">2015</xref>) and automatic adjustments of chair and desk heights (Kronenberg and Kuflik, <xref ref-type="bibr" rid="B66">2019</xref>). In their study, Kronenberg and Kuflik (<xref ref-type="bibr" rid="B66">2019</xref>) proposed a deep learning design for robotic arms that are capable of adjusting chair and desk heights based on body positions. Although the system was still in the implementation stage, initial results demonstrated the potential of embedding a camera in a robotic arm. This camera would interact with their proposed deep learning model.</p>
<p>Despite extensive research within this domain, limited work has examined camera-based detection of posture at the face, head, neck, and arms. While Min et al. (<xref ref-type="bibr" rid="B77">2015</xref>) explored body positions such as the back and spine using sensors, there is still a need to explore additional body positions captured by cameras. In a related study, Mudiyanselage et al. (<xref ref-type="bibr" rid="B81">2021</xref>) evaluated a workplace task that involved lifting work-related materials using wearable sensors and various machine learning models (Decision Tree, Support Vector Machine, K-Nearest Neighbor, and Random Forest). The results indicated that the decision tree models outperformed the others with a precision of 99.35%. Although these results were significant and focused on back body positions, there are still gaps within the context of computer workstations.</p>
<p>In another relevant study by Nath et al. (<xref ref-type="bibr" rid="B84">2018</xref>), significant work on lifting arm and wrist positions was considered using wearable sensors and the support vector machine (SVM) model. The study results demonstrated that SVM recognized over 80% of the risky positioning of the arm and wrist.</p>
<p>Hence, based on the persuasive and machine learning perspectives of workplace system design, different body positions are captured, and feedback is provided to support users. Nevertheless, there is a need to understand the extent to which research has supported making these technologies more accessible to diverse users. In the next section, we cover related work on making workplace posture technologies more accessible.</p></sec>
<sec>
<title>2.4 Accessibility technologies and healthy practices</title>
<p>Most accessibility technologies focus on providing feedback based on machine learning detection to address the needs of disabled individuals (Kulyukin and Gharpure, <xref ref-type="bibr" rid="B68">2006</xref>). Brik et al. (<xref ref-type="bibr" rid="B25">2021</xref>) developed an IoT-machine learning system designed to detect the thermal comfort of a room for disabled persons, offering feedback on the room&#x00027;s thermal condition. The machine learning system was based on artificial neural networks (ANNs). The performance of the ANNs was compared with other algorithms such as logistic regression classifiers (LRC), decision tree classifiers (DTC), and Gaussian na&#x000EF;ve Bayes classifiers (NBC). The ANN performed best, achieving 94% accuracy compared with the other algorithms.</p>
<p>In a related study, Ahmetovic et al. (<xref ref-type="bibr" rid="B3">2019</xref>) investigated navigation-based assistive technologies for the blind and visually impaired. They identified rotation errors and utilized a multi-layer perceptron machine learning model to correct rotation angles, providing positive feedback. The multi-layer perceptron achieved lower rotation errors (18.8&#x000B0; on average) when tested with 11 blind and visually impaired individuals in real-world settings.</p>
<p>Overall, we found that though related studies have explored healthy practices in workplace settings based on different persuasive technologies ranging from mobile to tangible, little work has covered real-time posture detection for important areas of the body such as the back, neck, hands, and head. These parts of the body have been associated with many repetitive stress injuries in workplaces arising from bad posture (Anderson and Oakman, <xref ref-type="bibr" rid="B10">2016</xref>; Catanzarite et al., <xref ref-type="bibr" rid="B27">2018</xref>; Krajnak, <xref ref-type="bibr" rid="B64">2018</xref>). The studies by Min et al. (<xref ref-type="bibr" rid="B77">2015</xref>) and Mudiyanselage et al. (<xref ref-type="bibr" rid="B81">2021</xref>) present closely related concepts. Though these studies explored parts of the body such as the back, spine, arms, and wrists, they used sensors, which might not be comfortable for users. Considering that laptop cameras can detect these parts of the body in an unobtrusive way, we explored this in our current study.</p></sec></sec>
<sec sec-type="materials and methods" id="s3">
<title>3 Materials and methods</title>
<p>We outline the materials and methods employed in the study. This aligns with the overarching goal of our research to investigate how individuals can become aware of their unhealthy posture practices in workplaces (both while sitting and standing) and the main research question (RQ: Can persuasive computers be designed to detect unhealthy posture practices in workplaces?). We provide details on the experimental materials used for developing deep learning models, specifically convolutional neural networks and Yolo-V3.</p>
<sec>
<title>3.1 Data collection and preprocessing</title>
<p>We conducted data collection in three phases (phase 1, phase 2, and phase 3). In the first phase, we gathered data by extracting Creative Commons image datasets from YouTube using the search terms ({bad} OR {good} AND {ergonomic posture}). Utilizing the Snip and Sketch tools, we extracted key frames depicting instances of both good and bad ergonomics. In total, we amassed a dataset of 269 images, comprising 157 examples of bad practices and 112 examples of good practices. The dataset from this initial phase was utilized for the pilot study, which aimed to assess the feasibility of employing machine learning for the detection of posture practices. <xref ref-type="fig" rid="F4">Figures 4</xref>, <xref ref-type="fig" rid="F5">5</xref> provide a cross-section of the datasets collected from YouTube.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Samples of bad practices. <bold>(A)</bold> Reproduced from &#x0201C;Center for Musculoskeletal Function: Workspace Ergonomics and MicroBreak Exercises,&#x0201D; YouTube, uploaded by &#x0201C;Dr. Daniel Yinh DC MS,&#x0201D; 10 Apr 2017, <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=HS2KrPmKySc">https://www.youtube.com/watch?v=HS2KrPmKySc</ext-link>, Permissions: YouTube Terms of Service. <bold>(B)</bold> Reproduced from &#x0201C;Correct Ergonomic Workstation Set-up | Daily Rehab &#x00023;23 | Feat. Tim Keeley | No.112 | Physio REHAB,&#x0201D; YouTube, uploaded by &#x0201C;Physio REHAB,&#x0201D; 13 December 2017, <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=FgW-9_28N8E&#x00026;t=314s">https://www.youtube.com/watch?v=FgW-9_28N8E&#x00026;t=314s</ext-link>, Permissions: YouTube <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/t/terms">Terms of Service</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0004.tif"/>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Samples of the good practices. <bold>(A)</bold> Reproduced from &#x0201C;Working from home&#x02014;how to set up your laptop (correctly!) | Tim Keeley | Physio REHAB,&#x0201D; YouTube, uploaded by &#x0201C;Physio REHAB,&#x0201D; 19 March 2020, <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=6GlkoFnZpFk">https://www.youtube.com/watch?v=6GlkoFnZpFk</ext-link>, Permissions: YouTube Terms of Service. <bold>(B)</bold> Reproduced from &#x0201C;How to set up workstation at home,&#x0201D; YouTube, uploaded by &#x0201C;Sundial Clinics,&#x0201D; 12 April 2021, <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=wN-Ww1sCWNY">https://www.youtube.com/watch?v=wN-Ww1sCWNY</ext-link>, Permissions: YouTube <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/t/terms">Terms of Service</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0005.tif"/>
</fig>
<p>In addition, we gathered more image datasets from Pexels using the Snip and Sketch tools. Pexels offers royalty-free images that match both the good and bad workplace practices of computer users. Utilizing related search terms such as &#x0201C;people AND {using the computer}&#x0201D; OR &#x0201C;{looking head straight}&#x0201D; OR &#x0201C;{sitting in the office},&#x0201D; we extracted key frames, resulting in 618 instances of bad practices and 90 instances of good practices. These datasets were combined with those from Phase 1 to conduct the main study for YOLO-V3.</p>
<p>Recognizing the limitations of convolutional neural networks (CNN) with small datasets (Han et al., <xref ref-type="bibr" rid="B48">2018</xref>), we addressed this concern by collecting additional datasets. To enhance the dataset, we collected both zoomed-in and zoomed-out resolution images from Pexels. Research has shown that zooming, as one of the techniques of data augmentation, increases the size of a dataset (Shorten and Khoshgoftaar, <xref ref-type="bibr" rid="B108">2019</xref>). <xref ref-type="fig" rid="F6">Figures 6</xref>, <xref ref-type="fig" rid="F7">7</xref> offer a cross-section of the datasets collected from Pexels.</p>
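<p>To make the zoom technique concrete, the following minimal numpy sketch (our illustrative code under our own assumptions, not the pipeline used in this study) produces several zoomed-in variants of one image, each with the original height and width, which is one way zoom-based augmentation multiplies a small dataset:</p>

```python
import numpy as np

def zoom_image(img: np.ndarray, factor: float) -> np.ndarray:
    """Zoom into the centre of an image by `factor` (>1 zooms in),
    using nearest-neighbour resampling so the output keeps the
    original height and width."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)   # size of the crop window
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    # map each output pixel back to a crop pixel (nearest neighbour)
    rows = np.arange(h) * ch // h
    cols = np.arange(w) * cw // w
    return crop[rows][:, cols]

def augment_with_zooms(img, factors=(1.2, 1.5, 2.0)):
    """One original image -> several zoomed variants, multiplying the dataset."""
    return [zoom_image(img, f) for f in factors]
```

<p>With the (hypothetical) zoom factors above, each source photograph yields three extra training examples of the same shape as the original.</p>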
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Samples of bad posture. Reproduced from <ext-link ext-link-type="uri" xlink:href="https://www.pexels.com/">Pexels</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0006.tif"/>
</fig>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Samples of good posture. Reproduced from <ext-link ext-link-type="uri" xlink:href="https://www.pexels.com/">Pexels</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0007.tif"/>
</fig>
<p>For the Phase 3 data collection task, we explored the posture dataset available on Kaggle. Kaggle, known for its extensive repository of public datasets for machine learning (Tauchert et al., <xref ref-type="bibr" rid="B116">2020</xref>), provided a valuable resource. We added 311 images depicting good practices to the datasets from Phases 1 and 2. The combined datasets from this phase were used to conduct the main study experiment for convolutional neural networks (CNN). <xref ref-type="fig" rid="F8">Figure 8</xref> showcases a cross-section of sample images collected from Kaggle. Though Kaggle also offered some bad-posture images, we used only the good-posture ones to balance our datasets (we initially had more bad postures than good postures).</p>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Samples of the good practices. Reproduced from <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/datasets/sahasradityathyadi/posture-recognition/">Kaggle</ext-link>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0008.tif"/>
</fig>
<p>Additionally, we defined the two classes as &#x0201C;comfortable&#x0201D; and &#x0201C;uncomfortable.&#x0201D; All the image datasets depicting good practices were assigned to the &#x0201C;comfortable&#x0201D; class, while those depicting bad practices were assigned to the &#x0201C;uncomfortable&#x0201D; class. <xref ref-type="table" rid="T3">Table 3</xref> offers a summary of all the datasets collected for the study. We employed static image datasets as they are applicable to existing real-time detection studies (Huang et al., <xref ref-type="bibr" rid="B50">2019</xref>; Lu et al., <xref ref-type="bibr" rid="B73">2019</xref>), and a video is a sequence of moving images in frames (Lienhart et al., <xref ref-type="bibr" rid="B71">1997</xref>; Perazzi et al., <xref ref-type="bibr" rid="B92">2017</xref>). Hence, the computer vision library provides functionality to capture image frames each second and pass them to the machine learning model, which predicts the class in real time.</p>
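<p>The frame-by-frame classification loop can be sketched as follows. This is an illustrative, framework-agnostic sketch rather than the study&#x00027;s code: in practice the frames would come from OpenCV&#x00027;s <monospace>cv2.VideoCapture</monospace> read loop, and <monospace>predict</monospace> would be the trained model; the names, preprocessing, and 0.5 threshold here are our own assumptions.</p>

```python
import numpy as np

def classify_stream(frames, predict, threshold=0.5):
    """Label each incoming frame as 'comfortable' or 'uncomfortable'.

    frames  -- any iterable of image arrays (in practice, the frames
               yielded by an OpenCV cv2.VideoCapture read loop)
    predict -- a model callable returning the probability of the
               'uncomfortable' class for one preprocessed frame batch
    """
    labels = []
    for frame in frames:
        x = frame.astype("float32") / 255.0   # scale pixels to [0, 1]
        p = predict(x[np.newaxis, ...])       # add the batch axis
        labels.append("uncomfortable" if p >= threshold else "comfortable")
    return labels
```

<p>Each prediction can then drive immediate feedback to the user, which is what makes the detection &#x0201C;real time.&#x0201D;</p>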
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Summary of datasets distribution by source.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#919498;color:#ffffff">
<th valign="top" align="left"><bold>S/N</bold></th>
<th valign="top" align="left"><bold>Source</bold></th>
<th valign="top" align="left"><bold>Comfortable</bold></th>
<th valign="top" align="left"><bold>Uncomfortable</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1.</td>
<td valign="top" align="left">YouTube</td>
<td valign="top" align="left">112</td>
<td valign="top" align="left">157</td>
</tr> <tr>
<td valign="top" align="left">2.</td>
<td valign="top" align="left">Pexels</td>
<td valign="top" align="left">90</td>
<td valign="top" align="left">618</td>
</tr> <tr>
<td valign="top" align="left">3.</td>
<td valign="top" align="left">Kaggle</td>
<td valign="top" align="left">311</td>
<td valign="top" align="left">-</td>
</tr> <tr>
<td valign="top" align="left" colspan="2">Total</td>
<td valign="top" align="left">513</td>
<td valign="top" align="left">775</td>
</tr></tbody>
</table>
</table-wrap></sec>
<sec>
<title>3.2 Study description</title>
<p>We covered two significant steps, namely, the pilot and main studies. In the pilot study, we explored the feasibility of designing with a small dataset. We present this pilot study to guide the research community on the impact of dataset size in this area. In the main study, we extended the dataset to show improvements in the accuracy of the models. The datasets collected from YouTube during Phase 1 data collection were pre-processed and used to train the two models for the pilot study (CNN-pilot and Yolo-V3-pilot). We evaluated their performance through loss graphs and, in real time, through the mean average precision. The mean average precision is a metric for evaluating the accuracy of object detection, especially in real time (Padilla et al., <xref ref-type="bibr" rid="B91">2021</xref>). Furthermore, we combined datasets from YouTube and Pexels to train the YOLO-V3-main model. Additionally, we combined datasets from YouTube, Pexels, and Kaggle to train the CNN-main model. Both the YOLO-V3-main and CNN-main models were developed for the main study.</p>
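<p>As a reference for how the mean average precision is computed, the following sketch implements per-class average precision from ranked detections (the general scheme surveyed by Padilla et al., not code from this study; the input format of confidence scores and true/false-positive flags is our own assumption):</p>

```python
import numpy as np

def average_precision(scores, is_tp, n_ground_truth):
    """Average precision for one class: rank detections by confidence,
    accumulate precision/recall, and integrate the (monotone)
    precision envelope over recall."""
    order = np.argsort(scores)[::-1]                # highest confidence first
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / n_ground_truth
    precision = cum_tp / (cum_tp + cum_fp)
    # make precision monotonically decreasing, then integrate over recall
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, precision):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_average_precision(per_class_aps):
    """Mean average precision = mean of the per-class APs."""
    return float(np.mean(per_class_aps))
```

<p>A perfect detector (every ranked detection a true positive, all ground truths found) scores an AP of 1.0 for that class.</p>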
<sec>
<title>3.2.1 Pilot study</title>
<p>We conducted two experiments for the pilot study. The first experiment involved the development of the Yolo-V3 model (Yolo-V3-pilot). We performed an automatic data annotation task<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> on the entire dataset collected from YouTube. Subsequently, we trained our datasets on the Yolo-V3 model implementation of keras-yolo3<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> on the CPU, and we tested this implementation on Google Colab. The second experiment was implemented on the CNN model of Abhishekjl.<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> We selected Abhishekjl&#x00027;s framework because it applies the cv2 Python library, which is also used in the recent study by Singh and Agarwal (<xref ref-type="bibr" rid="B110">2022</xref>). In addition, the keras-yolo3 implementation has recently been applied to the current state-of-the-art pedestrian detection system by Jin et al. (<xref ref-type="bibr" rid="B56">2021</xref>) and other systems (Chen and Yeo, <xref ref-type="bibr" rid="B30">2019</xref>; Silva and Jung, <xref ref-type="bibr" rid="B109">2021</xref>). Hence, the datasets collected from YouTube were trained on the CNN model (CNN-pilot). The CNN-pilot model was trained and tested on Google Colab.</p></sec>
<sec>
<title>3.2.2 Main study</title>
<p>We conducted two experiments for the main study. In the first experiment, we combined datasets from YouTube and Pexels (from phases 1 and 2 of data collection). We performed automatic data annotation exclusively for datasets from Pexels. The annotation data were then added to pre-existing annotations from the pilot study to train a new Yolo-V3 model (Yolo-V3-main) for the main study, utilizing CPU resources. In the second experiment, we combined datasets from YouTube, Pexels, and Kaggle (from phases 1&#x02013;3) and trained them using Google Colab on the CNN model (CNN-main). Like the pilot study, both Yolo-V3 and CNN models were implemented based on the architectures of Keras-Yolo3 and Abhishekjl. In addition, we tested Yolo-V3-main and CNN-main in Google Colab.</p></sec></sec>
<sec>
<title>3.3 Overview of the CNN model</title>
<p>The CNN model (<xref ref-type="fig" rid="F9">Figure 9</xref>) consists of two convolutional 2D layers, two max_pooling 2D layers, one flatten layer, and two dense layers. Furthermore, the hyperparameters for the model include three rectified linear unit (RELU) activation functions for the convolutional 2D layers and one of the dense layers, one sigmoid activation function added to the last dense layer, the Adam optimizer, a learning rate of 1e-3, a batch size of 5, and 10 epochs. The loss of the CNN-pilot model was set to binary_crossentropy. The convolutional 2D layers combine the 2D input after filtering, computing the weights, and adding a bias term (Li et al., <xref ref-type="bibr" rid="B69">2019</xref>). The max_pooling2d layers reduce the input dimensions, leading to a reduction in outputs (Keras<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref>). The flatten layer combines all the layers into a flattened 2-D array that fits into the neural network classifier (Christa et al., <xref ref-type="bibr" rid="B32">2021</xref>). The dense layers are regular, deeply connected neural network layers that are used to return outputs from the model (Keras<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref>). We employed the rectified linear unit (RELU) activation function as it is one of the most widely used functions because of its improved performance (Dubey et al., <xref ref-type="bibr" rid="B40">2022</xref>). The sigmoid function was selected because it is suitable for binary classification tasks (Keras<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref>) such as the one in our study. We employed the Adam optimizer because it is memory efficient and requires limited processing resources (Ogundokun et al., <xref ref-type="bibr" rid="B87">2022</xref>). We set the learning rate of 1e-3 and batch size of 5 as we considered the sensitivity of CNN models to small datasets (Brigato and Iocchi, <xref ref-type="bibr" rid="B24">2021</xref>).</p>
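<p>The stack described above can be sketched in Keras as follows. The layer types, activations, optimizer, learning rate, and loss are as stated in the text; the filter counts (32 and 64), kernel size, dense width, and input resolution are illustrative assumptions, since those dimensions are not specified here:</p>

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_posture_cnn(input_shape=(128, 128, 3)):
    """Sketch of the described stack: two Conv2D + MaxPooling2D pairs,
    a Flatten layer, and two Dense layers ending in a sigmoid for the
    binary comfortable/uncomfortable decision."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),      # RELU activation 1
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),      # RELU activation 2
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),          # RELU activation 3
        layers.Dense(1, activation="sigmoid"),        # P('uncomfortable')
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # lr 1e-3
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

<p>Training with the stated settings would then be a call along the lines of <monospace>model.fit(x_train, y_train, batch_size=5, epochs=10, validation_data=(x_val, y_val))</monospace>.</p>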
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>CNN model architecture.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0009.tif"/>
</fig>
</sec><sec>
<title>3.4 Overview of the Yolo-V3 model</title>
<p>The Yolo-V3 model (<xref ref-type="fig" rid="F10">Figure 10</xref>) consists of 74 convolutional 2D layers, 71 batch normalization layers, 70 leaky rectified linear unit (ReLU) activation layers, two UpSampling2D layers, and one ZeroPadding2D layer. We set the hyperparameters for the model as follows: Adam optimizer, learning rate of 1e-4, and batch size of 16. We consider the Adam optimizer appropriate because it is memory efficient and requires limited processing resources (Ogundokun et al., <xref ref-type="bibr" rid="B87">2022</xref>). In addition, we chose a reduced learning rate and batch size because of the size of our dataset, which helps the model learn efficiently. Unlike the CNN, YOLO-V3 produced more annotated samples with varying dimensions, which is typical of YOLO-V3 data annotation (Diwate et al., <xref ref-type="bibr" rid="B39">2022</xref>). Furthermore, we varied the number of epochs between the pilot and main studies: four epochs for the pilot study (Section 4.1.2) and a maximum of 40 epochs for the main study (Section 4.2.2). We used the default loss function (binary_crossentropy) for the YOLO model. The convolutional 2D layers filter the 2D input, compute the weights, and add a bias term (Li et al., <xref ref-type="bibr" rid="B69">2019</xref>). The batch normalization layer normalizes inputs so that they continue to fit the model as the weights change with each batch the model processes (Arani et al., <xref ref-type="bibr" rid="B11">2022</xref>; Keras<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref>). The leaky ReLU activation layer is a leaky version of the rectified linear unit activation layer (Keras<xref ref-type="fn" rid="fn0008"><sup>8</sup></xref>); it introduces non-linearity between the outputs of successive layers of a neural network (Xu et al., <xref ref-type="bibr" rid="B122">2020</xref>). The UpSampling2D layer repeats the rows and columns of the input to increase its spatial dimensions (Liu et al., <xref ref-type="bibr" rid="B72">2022</xref>; Keras<xref ref-type="fn" rid="fn0009"><sup>9</sup></xref>). The ZeroPadding2D layer adds extra rows and columns of zeros around images to preserve their aspect ratio while being processed by the model (Dang et al., <xref ref-type="bibr" rid="B37">2020</xref>; Keras<xref ref-type="fn" rid="fn0010"><sup>10</sup></xref>).</p>
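<p>To make the behavior of these layers concrete, the following minimal NumPy sketch (illustrative only, not part of the model code) reproduces the element-wise effect of a leaky ReLU and of 2D zero padding; the alpha value of 0.3 matches Keras's LeakyReLU default and is an assumption here.</p>

```python
import numpy as np

def leaky_relu(x, alpha=0.3):
    """Leaky ReLU: passes positive values through, scales negatives by alpha."""
    return np.where(x > 0, x, alpha * x)

def zero_pad_2d(img, pad=1):
    """Add `pad` rows/columns of zeros around a 2D image, as ZeroPadding2D does."""
    return np.pad(img, pad_width=pad, mode="constant", constant_values=0)

x = np.array([-2.0, 0.0, 3.0])
print(leaky_relu(x))           # negatives are scaled by 0.3, positives unchanged

img = np.ones((2, 2))
print(zero_pad_2d(img).shape)  # (4, 4): one border of zeros on each side
```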
<fig id="F10" position="float">
<label>Figure 10</label>
<caption><p>Cross-section of the YOLO-V3 model architecture (full architecture is available at <xref ref-type="supplementary-material" rid="SM1">Appendix A1</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0010.tif"/>
</fig></sec></sec>
<sec sec-type="results" id="s4">
<title>4 Results</title>
<p>In this section, we present our findings from the pilot and main studies, covering our experiments with the Yolo-V3 and CNN models using datasets collected from YouTube, Pexels, and Kaggle.</p>
<sec>
<title>4.1 The pilot study</title>
<p>To assess the feasibility of the study, we developed two models for detecting workplace practices in real time: CNN and Yolo-V3. We chose these models based on their proven capabilities for supporting real-time object detection in previous research (Tan et al., <xref ref-type="bibr" rid="B113">2021</xref>; Alsanad et al., <xref ref-type="bibr" rid="B8">2022</xref>). For the CNN model, we divided the datasets into 75% training and 25% validation sets (refer to <xref ref-type="table" rid="T4">Table 4</xref>), a ratio employed in similar tasks (Azimjonov and &#x000D6;zmen, <xref ref-type="bibr" rid="B12">2021</xref>; Bavankumar et al., <xref ref-type="bibr" rid="B17">2021</xref>; Akter et al., <xref ref-type="bibr" rid="B5">2022</xref>). For the Yolo-V3 model, we programmatically split the datasets into 90% training and 10% validation sets, following previous studies that employed similar ratios for Yolo models (Akut, <xref ref-type="bibr" rid="B6">2019</xref>; Setyadi et al., <xref ref-type="bibr" rid="B106">2023</xref>; Wong et al., <xref ref-type="bibr" rid="B119">2023</xref>).</p>
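<p>The split ratios above can be reproduced with a simple random partition. The sketch below is a hypothetical illustration (the file names and seed are assumptions), using the 269-image pilot totals from Table 4; rounding the split point yields the 202/67 (CNN) and 242/27 (Yolo-V3) counts reported there.</p>

```python
import random

def split_dataset(items, train_fraction, seed=42):
    """Shuffle and split a list of samples into training and validation subsets."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_fraction)
    return shuffled[:n_train], shuffled[n_train:]

# Hypothetical file names standing in for the 269 pilot-study images.
images = [f"img_{i:04d}.jpg" for i in range(269)]

cnn_train, cnn_val = split_dataset(images, 0.75)    # CNN: 75/25 split
yolo_train, yolo_val = split_dataset(images, 0.90)  # Yolo-V3: 90/10 split
print(len(cnn_train), len(cnn_val))    # 202 67
print(len(yolo_train), len(yolo_val))  # 242 27
```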
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>Summary of dataset distribution for the pilot study.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#919498;color:#ffffff">
<th valign="top" align="left"><bold>S/N</bold></th>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="left" colspan="2"><bold>Comfortable</bold></th>
<th valign="top" align="left" colspan="2"><bold>Uncomfortable</bold></th>
<th valign="top" align="left" colspan="2"><bold>Total</bold></th>
</tr>
<tr style="background-color:#919498;color:#ffffff">
<th/>
<th/>
<th valign="top" align="left"><bold>Training</bold></th>
<th valign="top" align="left"><bold>Validation</bold></th>
<th valign="top" align="left"><bold>Training</bold></th>
<th valign="top" align="left"><bold>Validation</bold></th>
<th valign="top" align="left"><bold>Training</bold></th>
<th valign="top" align="left"><bold>Validation</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1.</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">84</td>
<td valign="top" align="left">28</td>
<td valign="top" align="left">118</td>
<td valign="top" align="left">39</td>
<td valign="top" align="left">202</td>
<td valign="top" align="left">67</td>
</tr> <tr>
<td valign="top" align="left">2.</td>
<td valign="top" align="left">Yolo-V3</td>
<td valign="top" align="left">101</td>
<td valign="top" align="left">11</td>
<td valign="top" align="left">141</td>
<td valign="top" align="left">16</td>
<td valign="top" align="left">242</td>
<td valign="top" align="left">27</td>
</tr> <tr>
<td valign="top" align="left" colspan="2">Total</td>
<td valign="top" align="left">185</td>
<td valign="top" align="left">39</td>
<td valign="top" align="left">259</td>
<td valign="top" align="left">55</td>
<td valign="top" align="left">444</td>
<td valign="top" align="left">94</td>
</tr></tbody>
</table>
</table-wrap>
<sec>
<title>4.1.1 CNN pilot study posture detection</title>
<p>We trained the CNN-pilot model for 10 epochs, using the stochastic gradient descent (SGD) optimizer with a learning rate of 1e-3. The results of our CNN training indicate a significant decrease in both training and validation loss values approaching the 10th epoch (see <xref ref-type="fig" rid="F11">Figure 11</xref>). The validation loss was lower than the training loss at epoch 10, suggesting slight underfitting of the model.</p>
<fig id="F11" position="float">
<label>Figure 11</label>
<caption><p>CNN-pilot model&#x00027;s training vs. validation loss.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0011.tif"/>
</fig>
<p>We deployed the model in real time using a computer vision Python library. When run on six real-time test instances, the model achieved a mean average precision of 52%. In most instances, better precision values were observed for &#x0201C;comfortable&#x0201D; compared with &#x0201C;uncomfortable&#x0201D; (see <xref ref-type="fig" rid="F12">Figure 12</xref>).</p>
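<p>The mean average precision reported across test instances can be read as a simple average of per-instance precision values. The sketch below is illustrative only; the six precision values are hypothetical, chosen solely so that they average to 52%.</p>

```python
def mean_average_precision(precisions):
    """Average a list of per-instance precision values (each in 0.0-1.0)."""
    if not precisions:
        raise ValueError("need at least one precision value")
    return sum(precisions) / len(precisions)

# Hypothetical per-instance precisions for six real-time test instances.
instance_precisions = [0.61, 0.48, 0.55, 0.44, 0.58, 0.46]
print(f"mAP: {mean_average_precision(instance_precisions):.0%}")  # mAP: 52%
```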
<fig id="F12" position="float">
<label>Figure 12</label>
<caption><p>CNN-pilot model&#x00027;s detection of posture.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0012.tif"/>
</fig></sec>
<sec>
<title>4.1.2 Yolo-V3 pilot study posture detection</title>
<p>The Yolo-V3-pilot model was trained in two layers over four epochs, using frozen layers to stabilize the loss and unfrozen layers to further reduce it. These layers were trained with hyperparameters including the Adam optimizer with a learning rate of 1e-4 and a batch size of 16. The results of training layers 1 and 2 of our YOLO-V3 model reveal a decrease in the training loss toward epoch 4 compared with the validation loss (refer to <xref ref-type="fig" rid="F13">Figure 13</xref>). However, it is typical for YOLO-V3 to return high loss values before epoch 10 (Li et al., <xref ref-type="bibr" rid="B70">2020</xref>).</p>
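<p>The freeze-then-unfreeze strategy can be sketched conceptually as follows. This toy example is not the actual keras-yolo3 training code; it only illustrates that frozen layers are skipped during weight updates while unfrozen layers continue to learn.</p>

```python
import numpy as np

class ToyLayer:
    """Minimal stand-in for a network layer with a trainable (frozen/unfrozen) flag."""
    def __init__(self, weights, trainable=True):
        self.weights = np.asarray(weights, dtype=float)
        self.trainable = trainable

def sgd_step(layers, grads, lr=1e-4):
    """Apply one gradient step, skipping frozen layers."""
    for layer, grad in zip(layers, grads):
        if layer.trainable:
            layer.weights -= lr * np.asarray(grad, dtype=float)

backbone = ToyLayer([1.0, 2.0], trainable=False)  # frozen: stabilizes the loss
head = ToyLayer([0.5, 0.5], trainable=True)       # unfrozen: keeps learning

sgd_step([backbone, head], grads=[[10.0, 10.0], [10.0, 10.0]])
print(backbone.weights)  # unchanged: [1. 2.]
print(head.weights)      # updated by lr * grad = 0.001
```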
<fig id="F13" position="float">
<label>Figure 13</label>
<caption><p>L-R: Yolo-V3-pilot model&#x00027;s training vs. validation loss (L: Layer 1 and R: Layer 2).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0013.tif"/>
</fig>
<p>We deployed the Yolo-V3-pilot model in real time for the classes &#x0201C;comfortable&#x0201D; and &#x0201C;uncomfortable.&#x0201D; For exceptional cases, we included a &#x0201C;neutral&#x0201D; class. This addition allows Yolo-V3 to handle instances where the detections do not match the expected classes. <xref ref-type="fig" rid="F14">Figures 14</xref>, <xref ref-type="fig" rid="F15">15</xref> showcase instances where the Yolo-V3-pilot model segmented areas of comfort compared with discomfort. In other cases, the model returned &#x0201C;neutral&#x0201D; while one of the researchers tested it in real time using the computer vision Python library. The model achieved a mean average precision of 64% across six real-time test instances.</p>
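<p>The &#x0201C;neutral&#x0201D; fallback can be illustrated with a confidence threshold: when neither expected class is detected confidently, the prediction is mapped to &#x0201C;neutral.&#x0201D; The sketch below is a hypothetical illustration; the 0.5 threshold is an assumption, not the model&#x00027;s actual setting.</p>

```python
def classify_posture(scores, threshold=0.5):
    """Map class confidences to a label, falling back to 'neutral'.

    `scores` maps class names ('comfortable'/'uncomfortable') to
    confidences in [0, 1]. The 0.5 threshold is illustrative only.
    """
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else "neutral"

print(classify_posture({"comfortable": 0.82, "uncomfortable": 0.10}))  # comfortable
print(classify_posture({"comfortable": 0.31, "uncomfortable": 0.28}))  # neutral
```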
<fig id="F14" position="float">
<label>Figure 14</label>
<caption><p>L-R: Yolo-V3-pilot model&#x00027;s posture detection: <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0001.tif"/> comfortable; <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0002.tif"/> uncomfortable; <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0003.tif"/> neutral. L: showing areas of discomfort around the eyes and where the hand intercepts the eyes. R: showing discomfort from the eye to the neck regions.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0014.tif"/>
</fig>
<fig id="F15" position="float">
<label>Figure 15</label>
<caption><p>L-R: Yolo-V3-pilot model&#x00027;s posture detection: <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0001.tif"/> comfortable; <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0002.tif"/> uncomfortable; <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0003.tif"/> neutral. L: showing areas of discomfort around the eyes, neck, and back regions. R: showing discomfort from the eye to the neck regions.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0015.tif"/>
</fig>
<p>From the results of both models (CNN-pilot and Yolo-V3-pilot), the Yolo-V3-pilot model&#x00027;s boxes extended beyond the face, capturing other significant areas of comfort or discomfort such as the eyes, neck, and back (see <xref ref-type="fig" rid="F14">Figures 14</xref>, <xref ref-type="fig" rid="F15">15</xref>).</p></sec></sec>
<sec>
<title>4.2 The main study</title>
<p>To enhance the performance of both models (CNN-main and Yolo-V3-main) in the main study, we trained these models on additional datasets collected from Pexels and Kaggle. For the Yolo-V3-main model, we combined YouTube datasets with those from Pexels, while the CNN-main model was trained on a combination of datasets from YouTube, Pexels, and Kaggle. In the case of the CNN-main model, we split the datasets into 75% training and 25% validation sets (refer to <xref ref-type="table" rid="T5">Table 5</xref>). We maintained the 90% training and 10% validation set split for the Yolo-V3-main model.</p>
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p>Summary of dataset distribution for the main study.</p></caption>
<table frame="box" rules="all">
<thead>
<tr style="background-color:#919498;color:#ffffff">
<th valign="top" align="left"><bold>S/N</bold></th>
<th valign="top" align="left"><bold>Model</bold></th>
<th valign="top" align="left" colspan="2"><bold>Comfortable</bold></th>
<th valign="top" align="left" colspan="2"><bold>Uncomfortable</bold></th>
<th valign="top" align="left" colspan="2"><bold>Total</bold></th>
</tr>
<tr style="background-color:#919498;color:#ffffff">
<th/>
<th/>
<th valign="top" align="left"><bold>Training</bold></th>
<th valign="top" align="left"><bold>Validation</bold></th>
<th valign="top" align="left"><bold>Training</bold></th>
<th valign="top" align="left"><bold>Validation</bold></th>
<th valign="top" align="left"><bold>Training</bold></th>
<th valign="top" align="left"><bold>Validation</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1.</td>
<td valign="top" align="left">CNN</td>
<td valign="top" align="left">384</td>
<td valign="top" align="left">129</td>
<td valign="top" align="left">581</td>
<td valign="top" align="left">194</td>
<td valign="top" align="left">965</td>
<td valign="top" align="left">323</td>
</tr> <tr>
<td valign="top" align="left">2</td>
<td valign="top" align="left">Yolo-V3</td>
<td valign="top" align="left">182</td>
<td valign="top" align="left">20</td>
<td valign="top" align="left">698</td>
<td valign="top" align="left">77</td>
<td valign="top" align="left">880</td>
<td valign="top" align="left">97</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2"><bold>Total</bold></td>
<td valign="top" align="left"><bold>566</bold></td>
<td valign="top" align="left"><bold>149</bold></td>
<td valign="top" align="left"><bold>1,279</bold></td>
<td valign="top" align="left"><bold>271</bold></td>
<td valign="top" align="left"><bold>1,845</bold></td>
<td valign="top" align="left"><bold>420</bold></td>
</tr></tbody>
</table>
</table-wrap>
<sec>
<title>4.2.1 CNN main study posture detection</title>
<p>We maintained the hyperparameters from the pilot study for the CNN, and the model was trained for 10 epochs. The results of our CNN training indicate a significant decrease in both training and validation loss values approaching the 10th epoch (see <xref ref-type="fig" rid="F16">Figure 16</xref>). The training loss was lower than the validation loss at epoch 10, indicating better convergence of the training and validation losses than reported earlier in the pilot study (see <xref ref-type="fig" rid="F11">Figure 11</xref>).</p>
<fig id="F16" position="float">
<label>Figure 16</label>
<caption><p>CNN-main model&#x00027;s training vs. validation loss.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0016.tif"/>
</fig>
<p>In real time, the CNN-main model predicted the &#x0201C;uncomfortable&#x0201D; class better (<xref ref-type="fig" rid="F17">Figure 17</xref>: 89.6, 98.7, 93.5, and 93.0%). The CNN-main model attained a mean average precision of 91% on 19 real-time test data points.</p>
<fig id="F17" position="float">
<label>Figure 17</label>
<caption><p>CNN-main model&#x00027;s posture detection.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0017.tif"/>
</fig></sec>
<sec>
<title>4.2.2 Yolo-V3 main study posture detection</title>
<p>As in the pilot study, the Yolo-V3-main model was trained in two layers, incorporating frozen layers for a stable loss and unfrozen layers to further reduce the loss. The first layer was set to train for 10 epochs, and the second layer started at the 11th epoch (continuing from the first layer) and concluded at the 39th epoch. These layers were trained with hyperparameters including the Adam optimizer with a learning rate of 1e-4 and a batch size of 16. The results for both layers 1 and 2 of the Yolo-V3-main model show that the training and validation loss curves converged at epoch 10 for the first layer and diverged slightly upward at epoch 39 for the second layer (see <xref ref-type="fig" rid="F18">Figure 18</xref>), implying slight overfitting of our Yolo-V3-main model.</p>
<fig id="F18" position="float">
<label>Figure 18</label>
<caption><p>L-R: Yolo-V3-main model&#x00027;s training vs. validation loss (L: Layer 1 and R: Layer 2).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0018.tif"/>
</fig>
<p>We deployed the Yolo-V3-main model in real time, and the results indicate that the model performed significantly better in detecting both classes, &#x0201C;comfortable&#x0201D; and &#x0201C;uncomfortable&#x0201D; (refer to <xref ref-type="fig" rid="F19">Figure 19</xref>). The Yolo-V3-main model achieved a mean average precision of 92% across 11 real-time test instances.</p>
<fig id="F19" position="float">
<label>Figure 19</label>
<caption><p>L-R: Yolo-V3-main model&#x00027;s posture detection: <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0001.tif"/> comfortable; <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0002.tif"/> uncomfortable; <inline-graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-i0003.tif"/> neutral.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0019.tif"/>
</fig></sec></sec></sec>
<sec sec-type="discussion" id="s5">
<title>5 Discussion</title>
<p>The study explored design opportunities for persuasive systems based on real-time posture detection. We conducted two experiments, namely, the pilot and main studies, utilizing two deep learning algorithms: CNN and Yolo-V3. In this section, we discuss the results and propose design recommendations aligned with the overarching goal of the study, addressing how people can become conscious of their unhealthy posture practices in workplaces, whether sitting or standing. Furthermore, we relate these findings to answering the main research question: RQ: Can we design persuasive computers to detect unhealthy posture practices, such as sitting and standing, in workplaces?</p>
<p>From the pilot study, we observed that the CNN-pilot model tended to base its detections on facial regions, occasionally extending to the neck. Additionally, the CNN-pilot model detected comfortable and uncomfortable postures with similar precision values. This lack of generalizability raises concerns, particularly given our overarching goal of ensuring that persuasive technologies encourage people to maintain proper posture practices: ideally, individuals should be prompted to change their uncomfortable postures more frequently.</p>
<p>In contrast, the Yolo-V3-pilot model, with its anchor boxes, provided more comprehensive coverage and detection of postures. While it is common for Yolo models to generate multiple anchor boxes when detecting objects (Zhang et al., <xref ref-type="bibr" rid="B125">2022</xref>), we observed trends of it detecting various body positions and regions associated with the required postures.</p>
<p>The main study results demonstrated a significant improvement in the CNN-main model compared with the CNN-pilot model. The convergence and drop of the loss values toward epoch 10 were notably pronounced, and the achieved mean average precision of 91% aligns well with the overarching goal of the study. The enhanced recognition of uncomfortable posture positions by the CNN-main model suggests that users of persuasive technologies would become more conscious of their posture practices.</p>
<p>Furthermore, there was a substantial improvement in the performance of the Yolo-V3-main model compared with the Yolo-V3-pilot model. The increased precision around both comfortable and uncomfortable body positions resulted in a mean average precision of 92%. Considering these results, we address the main research question by recommending the following.</p>
<p>D1. Persuasive systems can be customized to detect the posture positions of users. While there are promising prospects with the CNN model, particularly with additional training datasets, the Yolo-V3 model stands out in addressing crucial body positions such as the eyes, face, head, neck, and arms. The successes of Yolo-V3 models have been reported in real-time workplace monitoring, showcasing its capability to report multiple and significant positions (Saumya et al., <xref ref-type="bibr" rid="B103">2020</xref>).</p>
<p>D2. Persuasive systems based on the Yolo-V3 model can be trained to recognize various environmental conditions, such as the lighting conditions of the room, desk height, and leg position of users. While a previous study by Min et al. (<xref ref-type="bibr" rid="B77">2015</xref>) demonstrated the potential of using sensor readings based on back and arm movements, expanding to recognize more positions would necessitate multimodal datasets, sensors, and strategically positioned cameras to provide users with comprehensive feedback. It is important to note that this approach may require privacy permissions. The importance of aligning such feedback with users&#x00027; privacy expectations, both in private and social spaces, has been emphasized in the study by Brombacher et al. (<xref ref-type="bibr" rid="B26">2023</xref>). Additionally, a study by Bootsman et al. (<xref ref-type="bibr" rid="B21">2019</xref>) was limited to reading lumbar (back) posture data, overlooking other key postures that directly impact the back, as we have reported (eyes, head, neck, and arms).</p>
<p>D3. Persuasive systems based on the Yolo-V3 model can be trained to provide auditory feedback to users, particularly benefiting individuals with visual impairments. This customization could involve real-world feedback systems, such as a single beep sound for correct posture positions and a buzzer sound for incorrect posture positions. To enhance usability, additional concepts may be implemented, such as helping users locate body positions through a screen reader. Feedback systems, as reported in the study by Brombacher et al. (<xref ref-type="bibr" rid="B26">2023</xref>), have been recognized as effective in capturing users&#x00027; attention, especially when working behind a desk and receiving posture-related feedback.</p>
<sec>
<title>5.1 The present study vs. related studies</title>
<p>We present our methodology and results compared with existing studies. Deep learning models, compared with SVM and other algorithms used in existing studies (Tang et al., <xref ref-type="bibr" rid="B114">2015</xref>; Nath et al., <xref ref-type="bibr" rid="B84">2018</xref>; Mudiyanselage et al., <xref ref-type="bibr" rid="B81">2021</xref>; Zhang and Callaghan, <xref ref-type="bibr" rid="B124">2021</xref>), capture the variability of highly complex patterns in datasets. Hence, while SVM performs significantly better with small datasets, deep learning models require substantially more data. In a related study (Mudiyanselage et al., <xref ref-type="bibr" rid="B81">2021</xref>), SVM yielded 99.5% accuracy with 54 samples across five weightlifting classes (10, 15, 20, 30, and 35 lbs.); these results showed significant overfitting of the SVM model. In addition, in a related study conducted by Nath et al. (<xref ref-type="bibr" rid="B84">2018</xref>) with 9,069 samples across three classes of ergonomic weightlifting risks (low, moderate, and high), SVM achieved &#x0007E;80% accuracy.</p>
<p>We employed deep learning models (CNN and Yolo-v3) in this study, considering the variability of good and bad posture patterns that SVM and other non-deep learning models might not capture. While deep learning requires large datasets, we report our findings (Yolo-v3: 92% and CNN: 91% accuracy using 2,265 posture images for two classes, good and bad) to motivate future work with additional data. In another related real-time study by Zhang and Callaghan (<xref ref-type="bibr" rid="B124">2021</xref>) on different human postures (sitting, walking, standing, running, and lying) using a deep learning multi-layer perceptron (MLP), the authors reported accuracy up to 82% with few samples (30 training and 19 testing). Nevertheless, results from the study by Tang et al. (<xref ref-type="bibr" rid="B114">2015</xref>) revealed a significant number of misclassifications. In a similar task of human gesture recognition, deep neural networks (DNN) achieved an accuracy of 98.12%, attained using a dataset comprising 21,600 images across 10 distinct classes of hand gestures. While a comparison of Yolo-v3 and CNN has not been explored in previous studies, our results present the baseline performance of both models to guide future work.</p>
<sec>
<title>5.2 Limitation of the study</title>
<p>While we report these significant findings, we note the following limitations to improve future work. Although we found significant posture practices such as leg position and lying position, our findings are limited to the areas captured by the camera for sitting and standing body postures. Exploring these contexts further in future studies could inform the design of more wearable persuasive devices. In addition, our datasets are limited in size because few suitable instances are publicly available; in the future, we will run experiments to collect additional ground-truth datasets to enhance our models. Furthermore, to comprehensively assess the effectiveness of this technology in different workplaces (work-from-home, offices, and other spaces), a future study should include an evaluation of users&#x00027; perceptions, considering both the advantages and disadvantages. We propose this framework as a valuable posture assessment tool applicable to any workplace setting, whether at home or in an office; evaluating both contexts in future studies would contribute to a more comprehensive understanding of the applicability of the technology. Finally, there were variations in the design of the two models (YOLO-V3 and CNN); our comparisons might have favored YOLO-V3, especially with its dataset split ratio of 90% training and 10% validation sets, although this is inconclusive at this point. We recommend that future studies set the same standards for testing both models.</p>
<sec>
<title>5.3 Implication of future design on system proximity detection and posture</title>
<p>Considering the prospects of posture evaluation based on proximity detection, we designed a system to integrate with our proposed Yolo-V3 and CNN models in the future. It is recommended that a computer user maintain a distance of 40 cm from the computer (Woo et al., <xref ref-type="bibr" rid="B120">2016</xref>). To meet this requirement, we modified the proximity detection program by Harsh Jaggi<xref ref-type="fn" rid="fn0011"><sup>11</sup></xref> and present the preliminary results in <xref ref-type="fig" rid="F20">Figure 20</xref>.</p>
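<p>Face-distance programs of the kind modified here typically estimate distance from the camera via the similar-triangles relation distance = (real width * focal length) / pixel width, after a one-time calibration. The sketch below is a hedged illustration of that general approach; the face width, calibration distance, and pixel widths are assumed values, not those of our system.</p>

```python
KNOWN_FACE_WIDTH_CM = 14.3  # assumed average face width; calibration-dependent

def focal_length(calib_distance_cm, real_width_cm, width_in_pixels):
    """Calibrate focal length from one reference image taken at a known distance."""
    return (width_in_pixels * calib_distance_cm) / real_width_cm

def distance_to_camera(real_width_cm, focal, width_in_pixels):
    """Estimate subject-to-camera distance via the similar-triangles relation."""
    return (real_width_cm * focal) / width_in_pixels

# Calibration: face appears 200 px wide at 50 cm from the camera (hypothetical).
f = focal_length(50.0, KNOWN_FACE_WIDTH_CM, 200)

# At runtime: the face now appears 260 px wide, i.e., the user has moved closer.
d = distance_to_camera(KNOWN_FACE_WIDTH_CM, f, 260)
print(f"{d:.0f} cm -> {'too close' if d < 40 else 'ok'}")  # 38 cm -> too close
```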
<fig id="F20" position="float">
<label>Figure 20</label>
<caption><p>Proximity detection of uncomfortable and comfortable posture.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-07-1359906-g0020.tif"/>
</fig></sec></sec>
<sec id="s6">
<title>6 Conclusion and future work</title>
<p>We explored potential designs for persuasive systems based on real-time posture detection. Given how significantly persuasive systems and human factors engineering contribute to changing human behavior in workplaces, we conducted experiments using two deep learning models: convolutional neural networks (CNN) and Yolo-V3. These models have proven valuable in real-time detection of emotions, human activities, and behavior in previous research (Tan et al., <xref ref-type="bibr" rid="B113">2021</xref>; Alsanad et al., <xref ref-type="bibr" rid="B8">2022</xref>). Despite their effectiveness in various domains, little attention has been given to designing persuasive systems specifically for promoting proper postures in workplaces. Our overarching goal was to investigate how individuals can become more conscious of their posture practices while sitting and standing with a computer system. Additionally, we aimed to address the main research question: RQ: Can we design persuasive computers to detect unhealthy posture practices (such as sitting and standing) in workplaces?</p>
<p>Hence, based on the results of this study, we conclude with the following key insights:</p>
<list list-type="order">
<list-item><p>Posture detection based on deep learning models requires large datasets to implement.</p></list-item>
<list-item><p>Persuasive systems based on real-time posture detection should be tailored to capture more body positions. Overall, this helps to address more workplace requirements for behavioral changes.</p></list-item>
<list-item><p>There are prospects around eye strains, pupil datasets, and other contexts linked with stress. Hence, the framework of this study can be extended in the future.</p></list-item>
</list>
<p>In conclusion, our study highlights the potential for developing persuasive technologies that are specifically designed to support users in adhering to proper posture practices. The significance of this study prompts consideration for future exploration into themes such as more in-depth studies with large datasets, proximity detection, support for individuals with visual impairments in adopting optimal posture practices, eye strain detection, addressing various workplace requirements, and comparing outcomes of user studies with our technology across different workplaces such as work-from-home contexts, offices, and other settings.</p>
<sec sec-type="data-availability" id="s7">
<title>Data availability statement</title>
<p>The original contributions presented in the study are included in the article/<xref ref-type="sec" rid="s12">Supplementary material</xref>, further inquiries can be directed to the corresponding author.</p></sec>
<sec sec-type="ethics-statement" id="s8">
<title>Ethics statement</title>
<p>Ethical approval was not required for the study involving human data in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required in accordance with the national legislation and the institutional requirements.</p></sec>
<sec sec-type="author-contributions" id="s9">
<title>Author contributions</title>
<p>GA: Conceptualization, Data curation, Methodology, Writing &#x02013; original draft, Writing &#x02013; review &#x00026; editing. RO: Supervision, Writing &#x02013; review &#x00026; editing.</p></sec>
</body>
<back>
<sec sec-type="funding-information" id="s10">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<ack><p>The authors wish to acknowledge the efforts of colleagues who critically reviewed and provided insightful feedback on our study.</p>
</ack>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="s12">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fdata.2024.1359906/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fdata.2024.1359906/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.docx" id="SM1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Data_Sheet_2.xlsx" mimetype="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" xmlns:xlink="http://www.w3.org/1999/xlink"/></sec>
<fn-group>
<fn id="fn0001"><p><sup>1</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/iwinardhyas/auto_annotation/tree/master/auto_annotatation">https://github.com/iwinardhyas/auto_annotation/tree/master/auto_annotatation</ext-link></p></fn>
<fn id="fn0002"><p><sup>2</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/qqwweee/keras-yolo3">https://github.com/qqwweee/keras-yolo3</ext-link></p></fn>
<fn id="fn0003"><p><sup>3</sup><ext-link ext-link-type="uri" xlink:href="https://github.com/Abhishekjl/Facial-Emotion-detection-webcam-">https://github.com/Abhishekjl/Facial-Emotion-detection-webcam-</ext-link></p></fn>
<fn id="fn0004"><p><sup>4</sup><ext-link ext-link-type="uri" xlink:href="https://keras.io/api/layers/pooling_layers/max_pooling2d/">https://keras.io/api/layers/pooling_layers/max_pooling2d/</ext-link></p></fn>
<fn id="fn0005"><p><sup>5</sup><ext-link ext-link-type="uri" xlink:href="https://keras.io/api/layers/core_layers/dense/">https://keras.io/api/layers/core_layers/dense/</ext-link></p></fn>
<fn id="fn0006"><p><sup>6</sup><ext-link ext-link-type="uri" xlink:href="https://keras.io/api/layers/activations/">https://keras.io/api/layers/activations/</ext-link></p></fn>
<fn id="fn0007"><p><sup>7</sup><ext-link ext-link-type="uri" xlink:href="https://keras.io/api/layers/normalization_layers/batch_normalization/">https://keras.io/api/layers/normalization_layers/batch_normalization/</ext-link></p></fn>
<fn id="fn0008"><p><sup>8</sup><ext-link ext-link-type="uri" xlink:href="https://keras.io/api/layers/activation_layers/leaky_relu/">https://keras.io/api/layers/activation_layers/leaky_relu/</ext-link></p></fn>
<fn id="fn0009"><p><sup>9</sup><ext-link ext-link-type="uri" xlink:href="https://keras.io/api/layers/reshaping_layers/up_sampling2d/">https://keras.io/api/layers/reshaping_layers/up_sampling2d/</ext-link></p></fn>
<fn id="fn0010"><p><sup>10</sup><ext-link ext-link-type="uri" xlink:href="https://keras.io/api/layers/reshaping_layers/zero_padding2d/">https://keras.io/api/layers/reshaping_layers/zero_padding2d/</ext-link></p></fn>
<fn id="fn0011"><p><sup>11</sup><ext-link ext-link-type="uri" xlink:href="https://www.linkedin.com/pulse/face-distance-measurement-python-haar-cascade-unlocking-harsh-jaggi">https://www.linkedin.com/pulse/face-distance-measurement-python-haar-cascade-unlocking-harsh-jaggi</ext-link></p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abdullah</surname> <given-names>N. A. A.</given-names></name> <name><surname>Rahmat</surname> <given-names>N. H.</given-names></name> <name><surname>Zawawi</surname> <given-names>F. Z.</given-names></name> <name><surname>Khamsah</surname> <given-names>M. A. N.</given-names></name> <name><surname>Anuarsham</surname> <given-names>A. H.</given-names></name></person-group> (<year>2020</year>). <article-title>Coping with post COVID-19: Can work from home be a new norm?</article-title> <source>Eur. J. Soc. Sci. Stud.</source> <volume>5</volume>:<fpage>933</fpage>. <pub-id pub-id-type="doi">10.46827/ejsss.v5i6.933</pub-id></citation>
</ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmad</surname> <given-names>H. F.</given-names></name> <name><surname>Mukhtar</surname> <given-names>H.</given-names></name> <name><surname>Alaqail</surname> <given-names>H.</given-names></name> <name><surname>Seliaman</surname> <given-names>M.</given-names></name> <name><surname>Alhumam</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Investigating health-related features and their impact on the prediction of diabetes using machine learning</article-title>. <source>Appl. Sci</source>. <volume>11</volume>:<fpage>1173</fpage>. <pub-id pub-id-type="doi">10.3390/app11031173</pub-id><pub-id pub-id-type="pmid">27885969</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmetovic</surname> <given-names>D.</given-names></name> <name><surname>Mascetti</surname> <given-names>S.</given-names></name> <name><surname>Bernareggi</surname> <given-names>C.</given-names></name> <name><surname>Guerreiro</surname> <given-names>J.</given-names></name> <name><surname>Oh</surname> <given-names>U.</given-names></name> <name><surname>Asakawa</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Deep learning compensation of rotation errors during navigation assistance for people with visual impairments or blindness</article-title>. <source>ACM Trans. Access. Comput.</source> <volume>12</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1145/3349264</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ahtinen</surname> <given-names>A.</given-names></name> <name><surname>Andrejeff</surname> <given-names>E.</given-names></name> <name><surname>Harris</surname> <given-names>C.</given-names></name> <name><surname>V&#x000E4;&#x000E4;n&#x000E4;nen</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Let&#x00027;s walk at work: persuasion through the brainwolk walking meeting app,&#x0201D;</article-title> in <source>Proceedings of the 21st International Academic Mindtrek Conference</source> (<publisher-loc>Tampere</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>73</fpage>&#x02013;<lpage>82</lpage>.</citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Akter</surname> <given-names>S.</given-names></name> <name><surname>Prodhan</surname> <given-names>R. A.</given-names></name> <name><surname>Pias</surname> <given-names>T. S.</given-names></name> <name><surname>Eisenberg</surname> <given-names>D.</given-names></name> <name><surname>Fresneda Fernandez</surname> <given-names>J.</given-names></name></person-group> (<year>2022</year>). <article-title>M1M2: deep-learning-based real-time emotion recognition from neural activity</article-title>. <source>Sensors</source> <volume>22</volume>:<fpage>8467</fpage>. <pub-id pub-id-type="doi">10.3390/s22218467</pub-id><pub-id pub-id-type="pmid">36366164</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Akut</surname> <given-names>R. R.</given-names></name></person-group> (<year>2019</year>). <article-title>FILM: finding the location of microaneurysms on the retina</article-title>. <source>Biomed. Eng. Lett.</source> <volume>9</volume>, <fpage>497</fpage>&#x02013;<lpage>506</lpage>. <pub-id pub-id-type="doi">10.1007/s13534-019-00136-6</pub-id><pub-id pub-id-type="pmid">31799017</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alaydrus</surname> <given-names>L. L.</given-names></name> <name><surname>Nusraningrum</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Awareness of workstation ergonomics and occurrence of computer-related injuries</article-title>. <source>Ind. J. Publ. Health Res. Dev.</source> <volume>10</volume>:<fpage>9</fpage>. <pub-id pub-id-type="doi">10.5958/0976-5506.2019.04091.9</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alsanad</surname> <given-names>H. R.</given-names></name> <name><surname>Sadik</surname> <given-names>A. Z.</given-names></name> <name><surname>Ucan</surname> <given-names>O. N.</given-names></name> <name><surname>Ilyas</surname> <given-names>M.</given-names></name> <name><surname>Bayat</surname> <given-names>O.</given-names></name></person-group> (<year>2022</year>). <article-title>YOLO-V3 based real-time drone detection algorithm</article-title>. <source>Multimed. Tools Appl.</source> <volume>81</volume>, <fpage>26185</fpage>&#x02013;<lpage>26198</lpage>. <pub-id pub-id-type="doi">10.1007/s11042-022-12939-4</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Anagnostopoulou</surname> <given-names>E.</given-names></name> <name><surname>Magoutas</surname> <given-names>B.</given-names></name> <name><surname>Bothos</surname> <given-names>E.</given-names></name> <name><surname>Mentzas</surname> <given-names>G.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Persuasive technologies for sustainable smart cities: the case of urban mobility,&#x0201D;</article-title> in <source>Companion Proceedings of The 2019 World Wide Web Conference</source> (<publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>73</fpage>&#x02013;<lpage>82</lpage>.</citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>S. P.</given-names></name> <name><surname>Oakman</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Allied health professionals and work-related musculoskeletal disorders: a systematic review</article-title>. <source>Saf. Health Work</source> <volume>7</volume>, <fpage>259</fpage>&#x02013;<lpage>267</lpage>. <pub-id pub-id-type="doi">10.1016/j.shaw.2016.04.001</pub-id><pub-id pub-id-type="pmid">27924228</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arani</surname> <given-names>E.</given-names></name> <name><surname>Gowda</surname> <given-names>S.</given-names></name> <name><surname>Mukherjee</surname> <given-names>R.</given-names></name> <name><surname>Magdy</surname> <given-names>O.</given-names></name> <name><surname>Kathiresan</surname> <given-names>S.</given-names></name> <name><surname>Zonooz</surname> <given-names>B.</given-names></name></person-group> (<year>2022</year>). <article-title>A comprehensive study of real-time object detection networks across multiple domains: a survey</article-title>. <source>arXiv preprint arXiv:2208.10895</source>. <pub-id pub-id-type="doi">10.48550/arXiv.2208.10895</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Azimjonov</surname> <given-names>J.</given-names></name> <name><surname>&#x000D6;zmen</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>A real-time vehicle detection and a novel vehicle tracking systems for estimating and monitoring traffic flow on highways</article-title>. <source>Adv. Eng. Informat.</source> <volume>50</volume>:<fpage>101393</fpage>. <pub-id pub-id-type="doi">10.1016/j.aei.2021.101393</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baba</surname> <given-names>E. I.</given-names></name> <name><surname>Baba</surname> <given-names>D. D.</given-names></name> <name><surname>Oborah</surname> <given-names>J. O.</given-names></name></person-group> (<year>2021</year>). <article-title>Effect of office ergonomics on office workers&#x00027; productivity in the polytechnics, Nigeria</article-title>. <source>J. Educ. Pract.</source> <volume>12</volume>, <fpage>67</fpage>&#x02013;<lpage>75</lpage>. <pub-id pub-id-type="doi">10.7176/JEP/12-3-10</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Bailly</surname> <given-names>G.</given-names></name> <name><surname>Sahdev</surname> <given-names>S.</given-names></name> <name><surname>Malacria</surname> <given-names>S.</given-names></name> <name><surname>Pietrzak</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;LivingDesktop: augmenting desktop workstation with actuated devices,&#x0201D;</article-title> in <source>Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems</source> (<publisher-name>ACM</publisher-name>), <fpage>5298</fpage>&#x02013;<lpage>5310</lpage>.</citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barrett</surname> <given-names>J. M.</given-names></name> <name><surname>McKinnon</surname> <given-names>C.</given-names></name> <name><surname>Callaghan</surname> <given-names>J. P.</given-names></name></person-group> (<year>2020</year>). <article-title>Cervical spine joint loading with neck flexion</article-title>. <source>Ergonomics</source> <volume>63</volume>, <fpage>101</fpage>&#x02013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1080/00140139.2019.1677944</pub-id><pub-id pub-id-type="pmid">31594480</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bartlett</surname> <given-names>Y. K.</given-names></name> <name><surname>Webb</surname> <given-names>T. L.</given-names></name> <name><surname>Hawley</surname> <given-names>M. S.</given-names></name></person-group> (<year>2017</year>). <article-title>Using persuasive technology to increase physical activity in people with chronic obstructive pulmonary disease by encouraging regular walking: a mixed-methods study exploring opinions and preferences</article-title>. <source>J. Med. Internet Res</source>. <volume>19</volume>:<fpage>e124</fpage>. <pub-id pub-id-type="doi">10.2196/jmir.6616</pub-id><pub-id pub-id-type="pmid">28428155</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bavankumar</surname> <given-names>S.</given-names></name> <name><surname>Rajalingam</surname> <given-names>B.</given-names></name> <name><surname>Santhoshkumar</surname> <given-names>R.</given-names></name> <name><surname>JawaherlalNehru</surname> <given-names>G.</given-names></name> <name><surname>Deepan</surname> <given-names>P.</given-names></name> <name><surname>Balaraman</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>A real time prediction and classification of face mask detection using CNN model</article-title>. <source>Turk. Online J. Qual. Inquiry</source> <volume>12</volume>, <fpage>7282</fpage>&#x02013;<lpage>7292</lpage>.</citation>
</ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Beheshtian</surname> <given-names>N.</given-names></name> <name><surname>Moradi</surname> <given-names>S.</given-names></name> <name><surname>Ahtinen</surname> <given-names>A.</given-names></name> <name><surname>V&#x000E4;&#x000E4;nanen</surname> <given-names>K.</given-names></name> <name><surname>K&#x000E4;hkonen</surname> <given-names>K.</given-names></name> <name><surname>Laine</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Greenlife: a persuasive social robot to enhance the sustainable behavior in shared living spaces,&#x0201D;</article-title> in <source>Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society</source> (<publisher-name>ACM</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>12</lpage>.</citation>
</ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Berque</surname> <given-names>D.</given-names></name> <name><surname>Burgess</surname> <given-names>J.</given-names></name> <name><surname>Billingsley</surname> <given-names>A.</given-names></name> <name><surname>Johnson</surname> <given-names>S.</given-names></name> <name><surname>Bonebright</surname> <given-names>T. L.</given-names></name> <name><surname>Wethington</surname> <given-names>B.</given-names></name></person-group> (<year>2011</year>). <article-title>&#x0201C;Design and evaluation of persuasive technology to encourage healthier typing behaviors,&#x0201D;</article-title> in <source>Proceedings of the 6th International Conference on Persuasive Technology: Persuasive Technology and Design: Enhancing Sustainability and Health</source>, <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boadi-Kusi</surname> <given-names>S. B.</given-names></name> <name><surname>Adueming</surname> <given-names>P. O. W.</given-names></name> <name><surname>Hammond</surname> <given-names>F. A.</given-names></name> <name><surname>Antiri</surname> <given-names>E. O.</given-names></name></person-group> (<year>2022</year>). <article-title>Computer vision syndrome and its associated ergonomic factors among bank workers</article-title>. <source>Int. J. Occup. Saf. Ergon.</source> <volume>28</volume>, <fpage>1219</fpage>&#x02013;<lpage>1226</lpage>. <pub-id pub-id-type="doi">10.1080/10803548.2021.1897260</pub-id><pub-id pub-id-type="pmid">33648427</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bootsman</surname> <given-names>R.</given-names></name> <name><surname>Markopoulos</surname> <given-names>P.</given-names></name> <name><surname>Qi</surname> <given-names>Q.</given-names></name> <name><surname>Wang</surname> <given-names>Q.</given-names></name> <name><surname>Timmermans</surname> <given-names>A. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Wearable technology for posture monitoring at the workplace</article-title>. <source>Int. J. Hum. Comput. Stud.</source> <volume>132</volume>, <fpage>99</fpage>&#x02013;<lpage>111</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijhcs.2019.08.003</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Borhany</surname> <given-names>T.</given-names></name> <name><surname>Shahid</surname> <given-names>E.</given-names></name> <name><surname>Siddique</surname> <given-names>W. A.</given-names></name> <name><surname>Ali</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Musculoskeletal problems in frequent computer and internet users</article-title>. <source>J. Fam. Med. Prim. Care</source> <volume>7</volume>, <fpage>337</fpage>&#x02013;<lpage>339</lpage>. <pub-id pub-id-type="doi">10.4103/jfmpc.jfmpc_326_17</pub-id><pub-id pub-id-type="pmid">30090774</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Botter</surname> <given-names>J.</given-names></name> <name><surname>Ellegast</surname> <given-names>R. P.</given-names></name> <name><surname>Burford</surname> <given-names>E. M.</given-names></name> <name><surname>Weber</surname> <given-names>B.</given-names></name> <name><surname>K&#x000F6;nemann</surname> <given-names>R.</given-names></name> <name><surname>Commissaris</surname> <given-names>D. A.</given-names></name></person-group> (<year>2016</year>). <article-title>Comparison of the postural and physiological effects of two dynamic workstations to conventional sitting and standing workstations</article-title>. <source>Ergonomics</source> <volume>59</volume>, <fpage>449</fpage>&#x02013;<lpage>463</lpage>. <pub-id pub-id-type="doi">10.1080/00140139.2015.1080861</pub-id><pub-id pub-id-type="pmid">26387640</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Brigato</surname> <given-names>L.</given-names></name> <name><surname>Iocchi</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;A close look at deep learning with small data,&#x0201D;</article-title> in <source>2020 25th International Conference on Pattern Recognition (ICPR)</source> (<publisher-loc>Milan</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2490</fpage>&#x02013;<lpage>2497</lpage>.<pub-id pub-id-type="pmid">33685371</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brik</surname> <given-names>B.</given-names></name> <name><surname>Esseghir</surname> <given-names>M.</given-names></name> <name><surname>Merghem-Boulahia</surname> <given-names>L.</given-names></name> <name><surname>Snoussi</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>An IoT-based deep learning approach to analyse indoor thermal comfort of disabled people</article-title>. <source>Build. Environ</source>. <volume>203</volume>:<fpage>108056</fpage>. <pub-id pub-id-type="doi">10.1016/j.buildenv.2021.108056</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brombacher</surname> <given-names>H.</given-names></name> <name><surname>Houben</surname> <given-names>S.</given-names></name> <name><surname>Vos</surname> <given-names>S.</given-names></name></person-group> (<year>2023</year>). <article-title>Tangible interventions for office work well-being: approaches, classification, and design considerations</article-title>. <source>Behav. Inform. Technol.</source> <volume>2023</volume>, <fpage>1</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1080/0144929X.2023.2241561</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Catanzarite</surname> <given-names>T.</given-names></name> <name><surname>Tan-Kim</surname> <given-names>J.</given-names></name> <name><surname>Whitcomb</surname> <given-names>E. L.</given-names></name> <name><surname>Menefee</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>Ergonomics in surgery: a review</article-title>. <source>Urogynecology</source> <volume>24</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1097/SPV.0000000000000456</pub-id><pub-id pub-id-type="pmid">28914699</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chandra</surname> <given-names>R.</given-names></name> <name><surname>Bera</surname> <given-names>A.</given-names></name> <name><surname>Manocha</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>Using graph-theoretic machine learning to predict human driver behavior</article-title>. <source>IEEE Trans. Intell. Transport. Syst.</source> <volume>23</volume>, <fpage>2572</fpage>&#x02013;<lpage>2585</lpage>. <pub-id pub-id-type="doi">10.1109/TITS.2021.3130218</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name> <name><surname>Richter</surname> <given-names>K.</given-names></name> <name><surname>Konrad</surname> <given-names>J.</given-names></name> <name><surname>Ishwar</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Estimating head pose orientation using extremely low resolution images,&#x0201D;</article-title> in <source>2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)</source> (<publisher-loc>Santa Fe, NM</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>65</fpage>&#x02013;<lpage>68</lpage>.</citation>
</ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>W.</given-names></name> <name><surname>Yeo</surname> <given-names>C. K.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Unauthorized parking detection using deep networks at real time,&#x0201D;</article-title> in <source>2019 IEEE International Conference on Smart Computing (SMARTCOMP)</source> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>459</fpage>&#x02013;<lpage>463</lpage>.</citation>
</ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cheng</surname> <given-names>L.</given-names></name> <name><surname>Guan</surname> <given-names>Y.</given-names></name> <name><surname>Zhu</surname> <given-names>K.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Recognition of human activities using machine learning methods with wearable sensors,&#x0201D;</article-title> in <source>2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC)</source> (<publisher-loc>Las Vegas, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1109/CCWC.2017.7868369</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Christa</surname> <given-names>G. H.</given-names></name> <name><surname>Jesica</surname> <given-names>J.</given-names></name> <name><surname>Anisha</surname> <given-names>K.</given-names></name> <name><surname>Sagayam</surname> <given-names>K. M.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;CNN-based mask detection system using openCV and MobileNetV2,&#x0201D;</article-title> in <source>2021 3rd International Conference on Signal Processing and Communication (ICPSC)</source> (<publisher-loc>Coimbatore</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>115</fpage>&#x02013;<lpage>119</lpage>.</citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cob-Parro</surname> <given-names>A. C.</given-names></name> <name><surname>Losada-Guti&#x000E9;rrez</surname> <given-names>C.</given-names></name> <name><surname>Marr&#x000F3;n-Romera</surname> <given-names>M.</given-names></name> <name><surname>Gardel-Vicente</surname> <given-names>A.</given-names></name> <name><surname>Bravo-Mu&#x000F1;oz</surname> <given-names>I.</given-names></name></person-group> (<year>2023</year>). <article-title>A new framework for deep learning video based Human Action Recognition on the edge</article-title>. <source>Expert Syst. Appl.</source> <volume>2023</volume>:<fpage>122220</fpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2023.122220</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dainoff</surname> <given-names>M.</given-names></name> <name><surname>Maynard</surname> <given-names>W.</given-names></name> <name><surname>Robertson</surname> <given-names>M.</given-names></name> <name><surname>Andersen</surname> <given-names>J. H.</given-names></name></person-group> (<year>2012</year>). <article-title>Office ergonomics</article-title>. <source>Handb. Hum. Fact. Ergon.</source> <volume>56</volume>, <fpage>1550</fpage>&#x02013;<lpage>1573</lpage>. <pub-id pub-id-type="doi">10.1002/9781118131350.ch56</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Damen</surname> <given-names>I.</given-names></name> <name><surname>Heerkens</surname> <given-names>L.</given-names></name> <name><surname>Van Den Broek</surname> <given-names>A.</given-names></name> <name><surname>Drabbels</surname> <given-names>K.</given-names></name> <name><surname>Cherepennikova</surname> <given-names>O.</given-names></name> <name><surname>Brombacher</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2020a</year>). <article-title>&#x0201C;PositionPeak: stimulating position changes during meetings,&#x0201D;</article-title> in <source>Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems</source> (<publisher-name>ACM</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation>
</ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Damen</surname> <given-names>I.</given-names></name> <name><surname>Kok</surname> <given-names>A.</given-names></name> <name><surname>Vink</surname> <given-names>B.</given-names></name> <name><surname>Brombacher</surname> <given-names>H.</given-names></name> <name><surname>Vos</surname> <given-names>S.</given-names></name> <name><surname>Lallemand</surname> <given-names>C.</given-names></name></person-group> (<year>2020b</year>). <article-title>&#x0201C;The hub: facilitating walking meetings through a network of interactive devices,&#x0201D;</article-title> in <source>Companion Publication of the 2020 ACM Designing Interactive Systems Conference</source> (<publisher-name>ACM</publisher-name>), <fpage>19</fpage>&#x02013;<lpage>24</lpage>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dang</surname> <given-names>K. B.</given-names></name> <name><surname>Nguyen</surname> <given-names>M. H.</given-names></name> <name><surname>Nguyen</surname> <given-names>D. A.</given-names></name> <name><surname>Phan</surname> <given-names>T. T. H.</given-names></name> <name><surname>Giang</surname> <given-names>T. L.</given-names></name> <name><surname>Pham</surname> <given-names>H. H.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Coastal wetland classification with deep U-Net convolutional networks and Sentinel-2 imagery: a case study at the Tien Yen estuary of Vietnam</article-title>. <source>Remote Sens</source>. <volume>12</volume>:<fpage>3270</fpage>. <pub-id pub-id-type="doi">10.3390/rs12193270</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Darioshi</surname> <given-names>R.</given-names></name> <name><surname>Lahav</surname> <given-names>E.</given-names></name></person-group> (<year>2021</year>). <article-title>The impact of technology on the human decision-making process</article-title>. <source>Hum. Behav. Emerg. Technol.</source> <volume>3</volume>, <fpage>391</fpage>&#x02013;<lpage>400</lpage>. <pub-id pub-id-type="doi">10.1002/hbe2.257</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Diwate</surname> <given-names>R. B.</given-names></name> <name><surname>Zagade</surname> <given-names>A.</given-names></name> <name><surname>Khodaskar</surname> <given-names>M. R.</given-names></name> <name><surname>Dange</surname> <given-names>V. R.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Optimization in object detection model using YOLO.v3,&#x0201D;</article-title> in <source>2022 International Conference on Emerging Smart Computing and Informatics (ESCI)</source> (<publisher-loc>Pune</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dubey</surname> <given-names>S. R.</given-names></name> <name><surname>Singh</surname> <given-names>S. K.</given-names></name> <name><surname>Chaudhuri</surname> <given-names>B. B.</given-names></name></person-group> (<year>2022</year>). <article-title>Activation functions in deep learning: a comprehensive survey and benchmark</article-title>. <source>Neurocomputing</source> <volume>503</volume>, <fpage>92</fpage>&#x02013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2022.06.111</pub-id><pub-id pub-id-type="pmid">37369638</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="web"><person-group person-group-type="author"><collab>Ergonomics</collab></person-group> (<year>2023</year>). <source>Ergonomics in the Work Environment</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://citeseerx.ist.psu.edu/document?repid=rep1&#x00026;type=pdf&#x00026;doi=dc7357a6b312d394785c2f6beb0fcef29fd9e584">https://citeseerx.ist.psu.edu/document?repid=rep1&#x00026;type=pdf&#x00026;doi=dc7357a6b312d394785c2f6beb0fcef29fd9e584</ext-link> (accessed November 5, 2023).</citation>
</ref>
<ref id="B42">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Faddoul</surname> <given-names>G.</given-names></name> <name><surname>Chatterjee</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;The virtual diabetician: a prototype for a virtual avatar for diabetes treatment using persuasion through storytelling,&#x0201D;</article-title> in <source>Proceedings of the 25th Americas Conference on Information Systems</source> (<publisher-loc>Canc&#x000FA;n</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Franke</surname> <given-names>M.</given-names></name> <name><surname>Nadler</surname> <given-names>C.</given-names></name></person-group> (<year>2021</year>). <article-title>Towards a holistic approach for assessing the impact of IEQ on satisfaction, health, and productivity</article-title>. <source>Build. Res. Inform.</source> <volume>49</volume>, <fpage>417</fpage>&#x02013;<lpage>444</lpage>. <pub-id pub-id-type="doi">10.1080/09613218.2020.1788917</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fukuoka</surname> <given-names>Y.</given-names></name> <name><surname>Haskell</surname> <given-names>W.</given-names></name> <name><surname>Lin</surname> <given-names>F.</given-names></name> <name><surname>Vittinghoff</surname> <given-names>E.</given-names></name></person-group> (<year>2019</year>). <article-title>Short- and long-term effects of a mobile phone app in conjunction with brief in-person counseling on physical activity among physically inactive women: the mPED randomized clinical trial</article-title>. <source>JAMA Netw. Open</source> <volume>2</volume>:<fpage>e194281</fpage>. <pub-id pub-id-type="doi">10.1001/jamanetworkopen.2019.4281</pub-id><pub-id pub-id-type="pmid">31125101</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gill</surname> <given-names>R.</given-names></name> <name><surname>Singh</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;A deep learning approach for real time facial emotion recognition,&#x0201D;</article-title> in <source>2021 10th International Conference on System Modeling &#x00026; Advancement in Research Trends (SMART)</source> (<publisher-loc>Moradabad</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>497</fpage>&#x02013;<lpage>501</lpage>.</citation>
</ref>
<ref id="B46">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gomez-Carmona</surname> <given-names>O.</given-names></name> <name><surname>Casado-Mansilla</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;SmiWork: an interactive smart mirror platform for workplace health promotion,&#x0201D;</article-title> in <source>2017 2nd International Multidisciplinary Conference on Computer and Energy Science (SpliTech)</source> (<publisher-loc>Split</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haliburton</surname> <given-names>L.</given-names></name> <name><surname>Kheirinejad</surname> <given-names>S.</given-names></name> <name><surname>Schmidt</surname> <given-names>A.</given-names></name> <name><surname>Mayer</surname> <given-names>S.</given-names></name></person-group> (<year>2023</year>). <article-title>Exploring smart standing desks to foster a healthier workplace</article-title>. <source>Proc. ACM Interact. Mob. Wear. Ubiquit. Technol.</source> <volume>7</volume>, <fpage>1</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1145/3596260</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Han</surname> <given-names>D.</given-names></name> <name><surname>Liu</surname> <given-names>Q.</given-names></name> <name><surname>Fan</surname> <given-names>W.</given-names></name></person-group> (<year>2018</year>). <article-title>A new image classification method using CNN transfer learning and web data augmentation</article-title>. <source>Expert Syst. Appl.</source> <volume>95</volume>, <fpage>43</fpage>&#x02013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2017.11.028</pub-id><pub-id pub-id-type="pmid">28877918</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haque</surname> <given-names>M. S.</given-names></name> <name><surname>Kangas</surname> <given-names>M.</given-names></name> <name><surname>J&#x000E4;ms&#x000E4;</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>A persuasive mHealth behavioral change intervention for promoting physical activity in the workplace: feasibility randomized controlled trial</article-title>. <source>JMIR Form. Res.</source> <volume>4</volume>:<fpage>e15083</fpage>. <pub-id pub-id-type="doi">10.2196/preprints.15083</pub-id><pub-id pub-id-type="pmid">32364506</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>R.</given-names></name> <name><surname>Gu</surname> <given-names>J.</given-names></name> <name><surname>Sun</surname> <given-names>X.</given-names></name> <name><surname>Hou</surname> <given-names>Y.</given-names></name> <name><surname>Uddin</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>A rapid recognition method for electronic components based on the improved YOLO-V3 network</article-title>. <source>Electronics</source> <volume>8</volume>:<fpage>825</fpage>. <pub-id pub-id-type="doi">10.3390/electronics8080825</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iyengar</surname> <given-names>K.</given-names></name> <name><surname>Upadhyaya</surname> <given-names>G. K.</given-names></name> <name><surname>Vaishya</surname> <given-names>R.</given-names></name> <name><surname>Jain</surname> <given-names>V.</given-names></name></person-group> (<year>2020</year>). <article-title>COVID-19 and applications of smartphone technology in the current pandemic</article-title>. <source>Diabet. Metabol. Syndr.</source> <volume>14</volume>, <fpage>733</fpage>&#x02013;<lpage>737</lpage>. <pub-id pub-id-type="doi">10.1016/j.dsx.2020.05.033</pub-id><pub-id pub-id-type="pmid">32497963</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jafarinaimi</surname> <given-names>N.</given-names></name> <name><surname>Forlizzi</surname> <given-names>J.</given-names></name> <name><surname>Hurst</surname> <given-names>A.</given-names></name> <name><surname>Zimmerman</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>&#x0201C;Breakaway: an ambient display designed to change human behavior,&#x0201D;</article-title> in <source>CHI&#x00027;05 Extended Abstracts on Human Factors in Computing Systems</source> (<publisher-loc>ACM</publisher-loc>), <fpage>1945</fpage>&#x02013;<lpage>1948</lpage>.</citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jaiswal</surname> <given-names>S.</given-names></name> <name><surname>Nandi</surname> <given-names>G. C.</given-names></name></person-group> (<year>2020</year>). <article-title>Robust real-time emotion detection system using CNN architecture</article-title>. <source>Neural Comput. Appl.</source> <volume>32</volume>, <fpage>11253</fpage>&#x02013;<lpage>11262</lpage>. <pub-id pub-id-type="doi">10.1007/s00521-019-04564-4</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Javad Koohsari</surname> <given-names>M.</given-names></name> <name><surname>Nakaya</surname> <given-names>T.</given-names></name> <name><surname>Shibata</surname> <given-names>A.</given-names></name> <name><surname>Ishii</surname> <given-names>K.</given-names></name> <name><surname>Oka</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>Working from home after the COVID-19 pandemic: do company employees sit more and move less?</article-title> <source>Sustainability</source> <volume>13</volume>:<fpage>939</fpage>. <pub-id pub-id-type="doi">10.3390/su13020939</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>M.</given-names></name> <name><surname>Nanjappan</surname> <given-names>V.</given-names></name> <name><surname>Liang</surname> <given-names>H. N.</given-names></name> <name><surname>ten Bh&#x000F6;mer</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title><italic>In-situ</italic> exploration of emotion regulation via smart clothing: an empirical study of healthcare workers in their work environment</article-title>. <source>Behav. Inform. Technol.</source> <volume>2021</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1080/0144929X.2021.1975821</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jin</surname> <given-names>C. J.</given-names></name> <name><surname>Shi</surname> <given-names>X.</given-names></name> <name><surname>Hui</surname> <given-names>T.</given-names></name> <name><surname>Li</surname> <given-names>D.</given-names></name> <name><surname>Ma</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>The automatic detection of pedestrians under the high-density conditions by deep learning techniques</article-title>. <source>J. Adv. Transport.</source> <volume>2021</volume>, <fpage>1</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1155/2021/1396326</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>A.</given-names></name> <name><surname>Dey</surname> <given-names>S.</given-names></name> <name><surname>Nguyen</surname> <given-names>H.</given-names></name> <name><surname>Groth</surname> <given-names>M.</given-names></name> <name><surname>Joyce</surname> <given-names>S.</given-names></name> <name><surname>Tan</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>A review and agenda for examining how technology-driven changes at work will impact workplace mental health and employee well-being</article-title>. <source>Austr. J. Manag.</source> <volume>45</volume>, <fpage>402</fpage>&#x02013;<lpage>424</lpage>. <pub-id pub-id-type="doi">10.1177/0312896220922292</pub-id></citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jupalle</surname> <given-names>H.</given-names></name> <name><surname>Kouser</surname> <given-names>S.</given-names></name> <name><surname>Bhatia</surname> <given-names>A. B.</given-names></name> <name><surname>Alam</surname> <given-names>N.</given-names></name> <name><surname>Nadikattu</surname> <given-names>R. R.</given-names></name> <name><surname>Whig</surname> <given-names>P.</given-names></name></person-group> (<year>2022</year>). <article-title>Automation of human behaviors and its prediction using machine learning</article-title>. <source>Microsyst. Technol.</source> <volume>28</volume>, <fpage>1879</fpage>&#x02013;<lpage>1887</lpage>. <pub-id pub-id-type="doi">10.1007/s00542-022-05326-4</pub-id></citation>
</ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Karppinen</surname> <given-names>P.</given-names></name> <name><surname>Oinas-Kukkonen</surname> <given-names>H.</given-names></name> <name><surname>Alah&#x000E4;iv&#x000E4;l&#x000E4;</surname> <given-names>T.</given-names></name> <name><surname>Jokelainen</surname> <given-names>T.</given-names></name> <name><surname>Ker&#x000E4;nen</surname> <given-names>A. M.</given-names></name> <name><surname>Salonurmi</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Persuasive user experiences of a health Behavior Change Support System: a 12-month study for prevention of metabolic syndrome</article-title>. <source>Int. J. Med. Informat.</source> <volume>96</volume>, <fpage>51</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijmedinf.2016.02.005</pub-id><pub-id pub-id-type="pmid">26992482</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kember</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>Face recognition and the emergence of smart photography</article-title>. <source>J. Vis. Cult.</source> <volume>13</volume>, <fpage>182</fpage>&#x02013;<lpage>199</lpage>. <pub-id pub-id-type="doi">10.1177/1470412914541767</pub-id><pub-id pub-id-type="pmid">34502609</pub-id></citation></ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>M. T.</given-names></name> <name><surname>Kim</surname> <given-names>K. B.</given-names></name> <name><surname>Nguyen</surname> <given-names>T. H.</given-names></name> <name><surname>Ko</surname> <given-names>J.</given-names></name> <name><surname>Zabora</surname> <given-names>J.</given-names></name> <name><surname>Jacobs</surname> <given-names>E.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Motivating people to sustain healthy lifestyles using persuasive technology: a pilot study of Korean Americans with prediabetes and type 2 diabetes</article-title>. <source>Pat. Educ. Counsel.</source> <volume>102</volume>, <fpage>709</fpage>&#x02013;<lpage>717</lpage>. <pub-id pub-id-type="doi">10.1016/j.pec.2018.10.021</pub-id><pub-id pub-id-type="pmid">30391298</pub-id></citation></ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>W.</given-names></name> <name><surname>Lorenzini</surname> <given-names>M.</given-names></name> <name><surname>Balatti</surname> <given-names>P.</given-names></name> <name><surname>Nguyen</surname> <given-names>P. D.</given-names></name> <name><surname>Pattacini</surname> <given-names>U.</given-names></name> <name><surname>Tikhanoff</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Adaptable workstations for human-robot collaboration: a reconfigurable framework for improving worker ergonomics and productivity</article-title>. <source>IEEE Robot. Automat. Mag.</source> <volume>26</volume>, <fpage>14</fpage>&#x02013;<lpage>26</lpage>. <pub-id pub-id-type="doi">10.1109/MRA.2018.2890460</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ko Ko</surname> <given-names>T.</given-names></name> <name><surname>Dickson-Gomez</surname> <given-names>J.</given-names></name> <name><surname>Yasmeen</surname> <given-names>G.</given-names></name> <name><surname>Han</surname> <given-names>W. W.</given-names></name> <name><surname>Quinn</surname> <given-names>K.</given-names></name> <name><surname>Beyer</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Informal workplaces and their comparative effects on the health of street vendors and home-based garment workers in Yangon, Myanmar: a qualitative study</article-title>. <source>BMC Publ. Health</source> <volume>20</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1186/s12889-020-08624-6</pub-id><pub-id pub-id-type="pmid">32306950</pub-id></citation></ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krajnak</surname> <given-names>K.</given-names></name></person-group> (<year>2018</year>). <article-title>Health effects associated with occupational exposure to hand-arm or whole body vibration</article-title>. <source>J. Toxicol. Environ. Health B</source> <volume>21</volume>, <fpage>320</fpage>&#x02013;<lpage>334</lpage>. <pub-id pub-id-type="doi">10.1080/10937404.2018.1557576</pub-id><pub-id pub-id-type="pmid">30583715</pub-id></citation></ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krishna</surname> <given-names>K.</given-names></name> <name><surname>Jain</surname> <given-names>D.</given-names></name> <name><surname>Mehta</surname> <given-names>S. V.</given-names></name> <name><surname>Choudhary</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). <article-title>An LSTM-based system for prediction of human activities with durations</article-title>. <source>Proc. ACM Interact. Mob. Wear. Ubiquit. Technol.</source> <volume>1</volume>, <fpage>1</fpage>&#x02013;<lpage>31</lpage>. <pub-id pub-id-type="doi">10.1145/3161201</pub-id></citation>
</ref>
<ref id="B66">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kronenberg</surname> <given-names>R.</given-names></name> <name><surname>Kuflik</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Automatically adjusting computer screen,&#x0201D;</article-title> in <source>Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization</source>, <fpage>51</fpage>&#x02013;<lpage>56</lpage>.</citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kronenberg</surname> <given-names>R.</given-names></name> <name><surname>Kuflik</surname> <given-names>T.</given-names></name> <name><surname>Shimshoni</surname> <given-names>I.</given-names></name></person-group> (<year>2022</year>). <article-title>Improving office workers&#x00027; workspace using a self-adjusting computer screen</article-title>. <source>ACM Trans. Interact. Intell. Syst.</source> <volume>12</volume>, <fpage>1</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1145/3545993</pub-id></citation>
</ref>
<ref id="B68">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kulyukin</surname> <given-names>V. A.</given-names></name> <name><surname>Gharpure</surname> <given-names>C.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;Ergonomics-for-one in a robotic shopping cart for the blind,&#x0201D;</article-title> in <source>Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction</source> (<publisher-loc>ACM</publisher-loc>), <fpage>142</fpage>&#x02013;<lpage>149</lpage>.</citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>He</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>A directed acyclic graph network combined with CNN and LSTM for remaining useful life prediction</article-title>. <source>IEEE Access</source> <volume>7</volume>, <fpage>75464</fpage>&#x02013;<lpage>75475</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2019.2919566</pub-id></citation>
</ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>Z.</given-names></name> <name><surname>Luo</surname> <given-names>Y.</given-names></name> <name><surname>Qiu</surname> <given-names>Z.</given-names></name></person-group> (<year>2020</year>). <article-title>Real-time pattern-recognition of GPR images with YOLO v3 implemented by TensorFlow</article-title>. <source>Sensors</source> <volume>20</volume>:<fpage>6476</fpage>. <pub-id pub-id-type="doi">10.3390/s20226476</pub-id><pub-id pub-id-type="pmid">33198420</pub-id></citation></ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lienhart</surname> <given-names>R.</given-names></name> <name><surname>Pfeiffer</surname> <given-names>S.</given-names></name> <name><surname>Effelsberg</surname> <given-names>W.</given-names></name></person-group> (<year>1997</year>). <article-title>Video abstracting</article-title>. <source>Commun. ACM</source> <volume>40</volume>, <fpage>54</fpage>&#x02013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1145/265563.265572</pub-id></citation>
</ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>B.</given-names></name> <name><surname>Su</surname> <given-names>S.</given-names></name> <name><surname>Wei</surname> <given-names>J.</given-names></name></person-group> (<year>2022</year>). <article-title>The effect of data augmentation methods on pedestrian object detection</article-title>. <source>Electronics</source> <volume>11</volume>:<fpage>3185</fpage>. <pub-id pub-id-type="doi">10.3390/electronics11193185</pub-id></citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>S.</given-names></name> <name><surname>Wang</surname> <given-names>B.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Chen</surname> <given-names>L.</given-names></name> <name><surname>Linjian</surname> <given-names>M.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <article-title>A real-time object detection algorithm for video</article-title>. <source>Comput. Electr. Eng.</source> <volume>77</volume>, <fpage>398</fpage>&#x02013;<lpage>408</lpage>. <pub-id pub-id-type="doi">10.1016/j.compeleceng.2019.05.009</pub-id></citation>
</ref>
<ref id="B74">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ludden</surname> <given-names>G. D.</given-names></name> <name><surname>Meekhof</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Slowing down: introducing calm persuasive technology to increase wellbeing at work,&#x0201D;</article-title> in <source>Proceedings of the 28th Australian Conference on Computer-Human Interaction</source> (<publisher-loc>ACM</publisher-loc>), <fpage>435</fpage>&#x02013;<lpage>441</lpage>.</citation>
</ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mahesh</surname> <given-names>B.</given-names></name></person-group> (<year>2020</year>). <article-title>Machine learning algorithms: a review</article-title>. <source>Int. J. Sci. Res.</source> <volume>9</volume>, <fpage>381</fpage>&#x02013;<lpage>386</lpage>. <pub-id pub-id-type="doi">10.21275/ART20203995</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mateevitsi</surname> <given-names>V.</given-names></name> <name><surname>Reda</surname> <given-names>K.</given-names></name> <name><surname>Leigh</surname> <given-names>J.</given-names></name> <name><surname>Johnson</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;The health bar: a persuasive ambient display to improve the office worker&#x00027;s well being,&#x0201D;</article-title> in <source>Proceedings of the 5th Augmented Human International Conference</source> (<publisher-loc>ACM</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>2</lpage>.</citation>
</ref>
<ref id="B77">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Min</surname> <given-names>D. A.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name> <name><surname>Jang</surname> <given-names>S. A.</given-names></name> <name><surname>Kim</surname> <given-names>K. Y.</given-names></name> <name><surname>Jung</surname> <given-names>S. E.</given-names></name> <name><surname>Lee</surname> <given-names>J. H.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Pretty pelvis: a virtual pet application that breaks sedentary time by promoting gestural interaction,&#x0201D;</article-title> in <source>Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems</source> (<publisher-loc>ACM</publisher-loc>), <fpage>1259</fpage>&#x02013;<lpage>1264</lpage>.</citation>
</ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mohadis</surname> <given-names>H. M.</given-names></name> <name><surname>Mohamad Ali</surname> <given-names>N.</given-names></name> <name><surname>Smeaton</surname> <given-names>A. F.</given-names></name></person-group> (<year>2016</year>). <article-title>Designing a persuasive physical activity application for older workers: understanding end-user perceptions</article-title>. <source>Behav. Inform. Technol.</source> <volume>35</volume>, <fpage>1102</fpage>&#x02013;<lpage>1114</lpage>. <pub-id pub-id-type="doi">10.1080/0144929X.2016.1211737</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Moore</surname> <given-names>P. V.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;OSH and the future of work: benefits and risks of artificial intelligence tools in workplaces,&#x0201D;</article-title> in <source>Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Human Body and Motion: 10th International Conference, DHM 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part I 21</source> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>292</fpage>&#x02013;<lpage>315</lpage>.</citation>
</ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mowatt</surname> <given-names>L.</given-names></name> <name><surname>Gordon</surname> <given-names>C.</given-names></name> <name><surname>Santosh</surname> <given-names>A. B. R.</given-names></name> <name><surname>Jones</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Computer vision syndrome and ergonomic practices among undergraduate university students</article-title>. <source>Int. J. Clin. Practice</source> <volume>72</volume>:<fpage>e13035</fpage>. <pub-id pub-id-type="doi">10.1111/ijcp.13035</pub-id><pub-id pub-id-type="pmid">28980750</pub-id></citation></ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mudiyanselage</surname> <given-names>S. E.</given-names></name> <name><surname>Nguyen</surname> <given-names>P. H. D.</given-names></name> <name><surname>Rajabi</surname> <given-names>M. S.</given-names></name> <name><surname>Akhavian</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Automated workers&#x00027; ergonomic risk assessment in manual material handling using sEMG wearable sensors and machine learning</article-title>. <source>Electronics</source> <volume>10</volume>:<fpage>2558</fpage>. <pub-id pub-id-type="doi">10.3390/electronics10202558</pub-id></citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mujumdar</surname> <given-names>A.</given-names></name> <name><surname>Vaidehi</surname> <given-names>V.</given-names></name></person-group> (<year>2019</year>). <article-title>Diabetes prediction using machine learning algorithms</article-title>. <source>Proc. Comput. Sci.</source> <volume>165</volume>, <fpage>292</fpage>&#x02013;<lpage>299</lpage>. <pub-id pub-id-type="doi">10.1016/j.procs.2020.01.047</pub-id></citation>
</ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nanthavanij</surname> <given-names>S.</given-names></name> <name><surname>Jalil</surname> <given-names>S.</given-names></name> <name><surname>Ammarapala</surname> <given-names>V.</given-names></name></person-group> (<year>2008</year>). <article-title>Effects of body height, notebook computer size, and workstation height on recommended adjustments for proper work posture when operating a notebook computer</article-title>. <source>J. Hum. Ergol.</source> <volume>37</volume>, <fpage>67</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.11183/jhe1972.37.67</pub-id><pub-id pub-id-type="pmid">19227194</pub-id></citation></ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nath</surname> <given-names>N. D.</given-names></name> <name><surname>Chaspari</surname> <given-names>T.</given-names></name> <name><surname>Behzadan</surname> <given-names>A. H.</given-names></name></person-group> (<year>2018</year>). <article-title>Automated ergonomic risk monitoring using body-mounted sensors and machine learning</article-title>. <source>Adv. Eng. Informat.</source> <volume>38</volume>, <fpage>514</fpage>&#x02013;<lpage>526</lpage>. <pub-id pub-id-type="doi">10.1016/j.aei.2018.08.020</pub-id></citation>
</ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nimbarte</surname> <given-names>A. D.</given-names></name> <name><surname>Sivak-Callcott</surname> <given-names>J. A.</given-names></name> <name><surname>Zreiqat</surname> <given-names>M.</given-names></name> <name><surname>Chapman</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>Neck postures and cervical spine loading among microsurgeons operating with loupes and headlamp</article-title>. <source>IIE Trans. Occup. Erg. Hum. Fact.</source> <volume>1</volume>, <fpage>215</fpage>&#x02013;<lpage>223</lpage>. <pub-id pub-id-type="doi">10.1080/21577323.2013.840342</pub-id></citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ofori-Manteaw</surname> <given-names>B. B.</given-names></name> <name><surname>Antwi</surname> <given-names>W. K.</given-names></name> <name><surname>Arthur</surname> <given-names>L.</given-names></name></person-group> (<year>2015</year>). <article-title>Ergonomics and occupational health issues in diagnostic imaging: a survey of the situation at the Korle-Bu Teaching Hospital</article-title>. <source>Ergonomics</source> <volume>19</volume>, <fpage>93</fpage>&#x02013;<lpage>101</lpage>.</citation>
</ref>
<ref id="B87">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ogundokun</surname> <given-names>R. O.</given-names></name> <name><surname>Maskeliunas</surname> <given-names>R.</given-names></name> <name><surname>Misra</surname> <given-names>S.</given-names></name> <name><surname>Dama&#x00161;evi&#x0010D;ius</surname> <given-names>R.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Improved CNN based on batch normalization and adam optimizer,&#x0201D;</article-title> in <source>International Conference on Computational Science and Its Applications</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>593</fpage>&#x02013;<lpage>604</lpage>.<pub-id pub-id-type="pmid">37753018</pub-id></citation></ref>
<ref id="B88">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Orji</surname> <given-names>R.</given-names></name> <name><surname>Tondello</surname> <given-names>G. F.</given-names></name> <name><surname>Nacke</surname> <given-names>L. E.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Personalizing persuasive strategies in gameful systems to gamification user types,&#x0201D;</article-title> in <source>Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems</source> (<publisher-loc>ACM</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>14</lpage>.</citation>
</ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oyibo</surname> <given-names>K.</given-names></name> <name><surname>Morita</surname> <given-names>P. P.</given-names></name></person-group> (<year>2021</year>). <article-title>Designing better exposure notification apps: the role of persuasive design</article-title>. <source>JMIR Publ. Health Surveill</source>. <volume>7</volume>:<fpage>e28956</fpage>. <pub-id pub-id-type="doi">10.2196/28956</pub-id><pub-id pub-id-type="pmid">34783673</pub-id></citation></ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Paay</surname> <given-names>J.</given-names></name> <name><surname>Kjeldskov</surname> <given-names>J.</given-names></name> <name><surname>Papachristos</surname> <given-names>E.</given-names></name> <name><surname>Hansen</surname> <given-names>K. M.</given-names></name> <name><surname>J&#x000F8;rgensen</surname> <given-names>T.</given-names></name> <name><surname>Overgaard</surname> <given-names>K. L.</given-names></name></person-group> (<year>2022</year>). <article-title>Can digital personal assistants persuade people to exercise?</article-title> <source>Behav. Inform. Technol.</source> <volume>41</volume>, <fpage>416</fpage>&#x02013;<lpage>432</lpage>. <pub-id pub-id-type="doi">10.1080/0144929X.2020.1814412</pub-id></citation>
</ref>
<ref id="B91">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Padilla</surname> <given-names>R.</given-names></name> <name><surname>Passos</surname> <given-names>W. L.</given-names></name> <name><surname>Dias</surname> <given-names>T. L.</given-names></name> <name><surname>Netto</surname> <given-names>S. L.</given-names></name> <name><surname>Da Silva</surname> <given-names>E. A.</given-names></name></person-group> (<year>2021</year>). <article-title>A comparative analysis of object detection metrics with a companion open-source toolkit</article-title>. <source>Electronics</source> <volume>10</volume>:<fpage>279</fpage>. <pub-id pub-id-type="doi">10.3390/electronics10030279</pub-id></citation>
</ref>
<ref id="B92">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Perazzi</surname> <given-names>F.</given-names></name> <name><surname>Khoreva</surname> <given-names>A.</given-names></name> <name><surname>Benenson</surname> <given-names>R.</given-names></name> <name><surname>Schiele</surname> <given-names>B.</given-names></name> <name><surname>Sorkine-Hornung</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Learning video object segmentation from static images,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>IEEE</publisher-loc>), <fpage>2663</fpage>&#x02013;<lpage>2672</lpage>.</citation>
</ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pereira</surname> <given-names>M.</given-names></name> <name><surname>Comans</surname> <given-names>T.</given-names></name> <name><surname>Sj&#x000F8;gaard</surname> <given-names>G.</given-names></name> <name><surname>Straker</surname> <given-names>L.</given-names></name> <name><surname>Melloh</surname> <given-names>M.</given-names></name> <name><surname>O&#x00027;Leary</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>The impact of workplace ergonomics and neck-specific exercise versus ergonomics and health promotion interventions on office worker productivity: a cluster-randomized trial</article-title>. <source>Scand. J. Work Environ. Health</source> <volume>45</volume>, <fpage>42</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.5271/sjweh.3760</pub-id><pub-id pub-id-type="pmid">30132008</pub-id></citation></ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rabbi</surname> <given-names>M.</given-names></name> <name><surname>Pfammatter</surname> <given-names>A.</given-names></name> <name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Spring</surname> <given-names>B.</given-names></name> <name><surname>Choudhury</surname> <given-names>T.</given-names></name></person-group> (<year>2015</year>). <article-title>Automated personalized feedback for physical activity and dietary behavior change with mobile phones: a randomized controlled trial on adults</article-title>. <source>JMIR mHealth uHealth</source> <volume>3</volume>:<fpage>e4160</fpage>. <pub-id pub-id-type="doi">10.2196/mhealth.4160</pub-id><pub-id pub-id-type="pmid">25977197</pub-id></citation></ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rapoport</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Persuasive robotic technologies and the freedom of choice and action</article-title>. <source>Soc. Robot.</source> <volume>12</volume>, <fpage>219</fpage>&#x02013;<lpage>238</lpage>. <pub-id pub-id-type="doi">10.4324/9781315563084-12</pub-id></citation>
</ref>
<ref id="B96">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Reddy</surname> <given-names>U. S.</given-names></name> <name><surname>Thota</surname> <given-names>A. V.</given-names></name> <name><surname>Dharun</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Machine learning techniques for stress prediction in working employees,&#x0201D;</article-title> in <source>2018 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)</source> (<publisher-loc>Madurai</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B97">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Reeder</surname> <given-names>S.</given-names></name> <name><surname>Kelly</surname> <given-names>L.</given-names></name> <name><surname>Kechavarzi</surname> <given-names>B.</given-names></name> <name><surname>Sabanovic</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;Breakbot: a social motivator for the workplace,&#x0201D;</article-title> in <source>Proceedings of the 8th ACM Conference on Designing Interactive Systems</source> (<publisher-loc>ACM</publisher-loc>), <fpage>61</fpage>&#x02013;<lpage>64</lpage>.</citation>
</ref>
<ref id="B98">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ren</surname> <given-names>X.</given-names></name> <name><surname>Yu</surname> <given-names>B.</given-names></name> <name><surname>Lu</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>B.</given-names></name> <name><surname>Hu</surname> <given-names>J.</given-names></name> <name><surname>Brombacher</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>LightSit: an unobtrusive health-promoting system for relaxation and fitness microbreaks at work</article-title>. <source>Sensors</source> <volume>19</volume>:<fpage>2162</fpage>. <pub-id pub-id-type="doi">10.3390/s19092162</pub-id><pub-id pub-id-type="pmid">31075965</pub-id></citation></ref>
<ref id="B99">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Robledo Yamamoto</surname> <given-names>F.</given-names></name> <name><surname>Cho</surname> <given-names>J.</given-names></name> <name><surname>Voida</surname> <given-names>A.</given-names></name> <name><surname>Voida</surname> <given-names>S.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;We are researchers, but we are also humans&#x0201D;: creating a design space for managing graduate student stress</article-title>. <source>ACM Trans. Comput. Hum. Interact.</source> <volume>30</volume>, <fpage>1</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1145/3589956</pub-id></citation>
</ref>
<ref id="B100">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Roy</surname> <given-names>D.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Occupational health services and prevention of work-related musculoskeletal problems,&#x0201D;</article-title> in <source>Handbook on Management and Employment Practices</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>547</fpage>&#x02013;<lpage>571</lpage>.</citation>
</ref>
<ref id="B101">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarker</surname> <given-names>I. H.</given-names></name></person-group> (<year>2021</year>). <article-title>Machine learning: algorithms, real-world applications and research directions</article-title>. <source>SN Comput. Sci</source>. <volume>2</volume>:<fpage>160</fpage>. <pub-id pub-id-type="doi">10.1007/s42979-021-00592-x</pub-id><pub-id pub-id-type="pmid">33778771</pub-id></citation></ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarla</surname> <given-names>G. S.</given-names></name></person-group> (<year>2019</year>). <article-title>Excessive use of electronic gadgets: health effects</article-title>. <source>Egypt. J. Intern. Med.</source> <volume>31</volume>, <fpage>408</fpage>&#x02013;<lpage>411</lpage>. <pub-id pub-id-type="doi">10.4103/ejim.ejim_56_19</pub-id></citation>
</ref>
<ref id="B103">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Saumya</surname> <given-names>A.</given-names></name> <name><surname>Gayathri</surname> <given-names>V.</given-names></name> <name><surname>Venkateswaran</surname> <given-names>K.</given-names></name> <name><surname>Kale</surname> <given-names>S.</given-names></name> <name><surname>Sridhar</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Machine learning based surveillance system for detection of bike riders without helmet and triple rides,&#x0201D;</article-title> in <source>2020 International Conference on Smart Electronics and Communication (ICOSEC)</source> (<publisher-loc>Trichy</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>347</fpage>&#x02013;<lpage>352</lpage>.</citation>
</ref>
<ref id="B104">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schnall</surname> <given-names>R.</given-names></name> <name><surname>Bakken</surname> <given-names>S.</given-names></name> <name><surname>Rojas</surname> <given-names>M.</given-names></name> <name><surname>Travers</surname> <given-names>J.</given-names></name> <name><surname>Carballo-Dieguez</surname> <given-names>A.</given-names></name></person-group> (<year>2015</year>). <article-title>mHealth technology as a persuasive tool for treatment, care and management of persons living with HIV</article-title>. <source>AIDS Behav.</source> <volume>19</volume>, <fpage>81</fpage>&#x02013;<lpage>89</lpage>. <pub-id pub-id-type="doi">10.1007/s10461-014-0984-8</pub-id><pub-id pub-id-type="pmid">25572830</pub-id></citation></ref>
<ref id="B105">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schooley</surname> <given-names>B.</given-names></name> <name><surname>Akgun</surname> <given-names>D.</given-names></name> <name><surname>Duhoon</surname> <given-names>P.</given-names></name> <name><surname>Hikmet</surname> <given-names>N.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Persuasive AI voice-assisted technologies to motivate and encourage physical activity,&#x0201D;</article-title> in <source>Advances in Computer Vision and Computational Biology: Proceedings from IPCV&#x00027;20, HIMS&#x00027;20, BIOCOMP&#x00027;20, and BIOENG&#x00027;20</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>363</fpage>&#x02013;<lpage>384</lpage>.</citation>
</ref>
<ref id="B106">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Setyadi</surname> <given-names>A.</given-names></name> <name><surname>Kallista</surname> <given-names>M.</given-names></name> <name><surname>Setianingsih</surname> <given-names>C.</given-names></name> <name><surname>Araffathia</surname> <given-names>R.</given-names></name></person-group> (<year>2023</year>). <article-title>&#x0201C;Deep learning approaches to social distancing compliance and mask detection in dining environment,&#x0201D;</article-title> in <source>2023 IEEE Asia Pacific Conference on Wireless and Mobile (APWiMob)</source> (<publisher-loc>IEEE</publisher-loc>), <fpage>188</fpage>&#x02013;<lpage>194</lpage>.</citation>
</ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shahidi</surname> <given-names>B.</given-names></name> <name><surname>Curran-Everett</surname> <given-names>D.</given-names></name> <name><surname>Maluf</surname> <given-names>K. S.</given-names></name></person-group> (<year>2015</year>). <article-title>Psychosocial, physical, and neurophysiological risk factors for chronic neck pain: a prospective inception cohort study</article-title>. <source>J. Pain</source> <volume>16</volume>, <fpage>1288</fpage>&#x02013;<lpage>1299</lpage>. <pub-id pub-id-type="doi">10.1016/j.jpain.2015.09.002</pub-id><pub-id pub-id-type="pmid">26400680</pub-id></citation></ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shorten</surname> <given-names>C.</given-names></name> <name><surname>Khoshgoftaar</surname> <given-names>T. M.</given-names></name></person-group> (<year>2019</year>). <article-title>A survey on image data augmentation for deep learning</article-title>. <source>J. Big Data</source> <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.1186/s40537-019-0197-0</pub-id></citation>
</ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Silva</surname> <given-names>S. M.</given-names></name> <name><surname>Jung</surname> <given-names>C. R.</given-names></name></person-group> (<year>2021</year>). <article-title>A flexible approach for automatic license plate recognition in unconstrained scenarios</article-title>. <source>IEEE Trans. Intell. Transport. Syst.</source> <volume>23</volume>, <fpage>5693</fpage>&#x02013;<lpage>5703</lpage>. <pub-id pub-id-type="doi">10.1109/TITS.2021.3055946</pub-id></citation>
</ref>
<ref id="B110">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>A. P.</given-names></name> <name><surname>Agarwal</surname> <given-names>D.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Webcam motion detection in real-time using Python,&#x0201D;</article-title> in <source>2022 International Mobile and Embedded Technology Conference (MECON)</source> (<publisher-loc>Noida</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation>
</ref>
<ref id="B111">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sonntag</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Persuasive AI technologies for healthcare systems,&#x0201D;</article-title> in <source>2016 AAAI Fall Symposium Series.</source> (<publisher-loc>Stanford, CA; Washington, DC</publisher-loc>: <publisher-name>AAAI Press</publisher-name>).</citation>
</ref>
<ref id="B112">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Steiger</surname> <given-names>M.</given-names></name> <name><surname>Bharucha</surname> <given-names>T. J.</given-names></name> <name><surname>Venkatagiri</surname> <given-names>S.</given-names></name> <name><surname>Riedl</surname> <given-names>M. J.</given-names></name> <name><surname>Lease</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;The psychological well-being of content moderators: the emotional labor of commercial moderation and avenues for improving support,&#x0201D;</article-title> in <source>Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</source> (<publisher-loc>ACM</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>14</lpage>.</citation>
</ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tan</surname> <given-names>L.</given-names></name> <name><surname>Huangfu</surname> <given-names>T.</given-names></name> <name><surname>Wu</surname> <given-names>L.</given-names></name> <name><surname>Chen</surname> <given-names>W.</given-names></name></person-group> (<year>2021</year>). <article-title>Comparison of YOLO v3, faster R-CNN, and SSD for real-time pill identification</article-title>. <source>Res. Square</source>. <pub-id pub-id-type="doi">10.21203/rs.3.rs-668895/v1</pub-id><pub-id pub-id-type="pmid">34809632</pub-id></citation></ref>
<ref id="B114">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>A.</given-names></name> <name><surname>Lu</surname> <given-names>K.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Huang</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <article-title>A real-time hand posture recognition system using deep neural networks</article-title>. <source>ACM Trans. Intell. Syst. Technol.</source> <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1145/2735952</pub-id><pub-id pub-id-type="pmid">33804718</pub-id></citation></ref>
<ref id="B115">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>K. H. D.</given-names></name></person-group> (<year>2022</year>). <article-title>The prevalence, causes and prevention of occupational musculoskeletal disorders</article-title>. <source>Glob. Acad. J. Med. Sci.</source> <volume>4</volume>, <fpage>56</fpage>&#x02013;<lpage>68</lpage>. <pub-id pub-id-type="doi">10.36348/gajms.2022.v04i02.004</pub-id></citation>
</ref>
<ref id="B116">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tauchert</surname> <given-names>C.</given-names></name> <name><surname>Buxmann</surname> <given-names>P.</given-names></name> <name><surname>Lambinus</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Crowdsourcing data science: a qualitative analysis of organizations&#x00027; usage of kaggle competitions,&#x0201D;</article-title> in <source>Proceedings of the 53rd Hawaii International Conference on System Sciences</source>, <fpage>229</fpage>&#x02013;<lpage>238</lpage>.</citation>
</ref>
<ref id="B117">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van de Wijdeven</surname> <given-names>B.</given-names></name> <name><surname>Visser</surname> <given-names>B.</given-names></name> <name><surname>Daams</surname> <given-names>J.</given-names></name> <name><surname>Kuijer</surname> <given-names>P. P.</given-names></name></person-group> (<year>2023</year>). <article-title>A first step towards a framework for interventions for individual working practice to prevent work-related musculoskeletal disorders: a scoping review</article-title>. <source>BMC Musculoskelet. Disord</source>. <volume>24</volume>:<fpage>87</fpage>. <pub-id pub-id-type="doi">10.1186/s12891-023-06155-w</pub-id><pub-id pub-id-type="pmid">36726094</pub-id></citation></ref>
<ref id="B118">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>R.</given-names></name> <name><surname>Bush-Evans</surname> <given-names>R.</given-names></name> <name><surname>Arden-Close</surname> <given-names>E.</given-names></name> <name><surname>Bolat</surname> <given-names>E.</given-names></name> <name><surname>McAlaney</surname> <given-names>J.</given-names></name> <name><surname>Hodge</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>Transparency in persuasive technology, immersive technology, and online marketing: facilitating users&#x00027; informed decision making and practical implications</article-title>. <source>Comput. Hum. Behav</source>. <volume>139</volume>:<fpage>107545</fpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2022.107545</pub-id></citation>
</ref>
<ref id="B119">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wong</surname> <given-names>T. L.</given-names></name> <name><surname>Chou</surname> <given-names>K. S.</given-names></name> <name><surname>Wong</surname> <given-names>K. L.</given-names></name> <name><surname>Tang</surname> <given-names>S. K.</given-names></name></person-group> (<year>2023</year>). <article-title>Dataset of public objects in uncontrolled environment for navigation aiding</article-title>. <source>Data</source> <volume>8</volume>:<fpage>42</fpage>. <pub-id pub-id-type="doi">10.3390/data8020042</pub-id></citation>
</ref>
<ref id="B120">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Woo</surname> <given-names>E. H. C.</given-names></name> <name><surname>White</surname> <given-names>P.</given-names></name> <name><surname>Lai</surname> <given-names>C. W. K.</given-names></name></person-group> (<year>2016</year>). <article-title>Ergonomics standards and guidelines for computer workstation design and the impact on users&#x00027; health-a review</article-title>. <source>Ergonomics</source> <volume>59</volume>, <fpage>464</fpage>&#x02013;<lpage>475</lpage>. <pub-id pub-id-type="doi">10.1080/00140139.2015.1076528</pub-id><pub-id pub-id-type="pmid">26224145</pub-id></citation></ref>
<ref id="B121">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Workineh</surname> <given-names>S. A.</given-names></name> <name><surname>Yamaura</surname> <given-names>H.</given-names></name></person-group> (<year>2016</year>). <article-title>Multi-position ergonomic computer workstation design to increase comfort of computer work</article-title>. <source>Int. J. Indus. Erg.</source> <volume>53</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/j.ergon.2015.10.005</pub-id></citation>
</ref>
<ref id="B122">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Du</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Liu</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Reluplex made more practical: leaky ReLU,&#x0201D;</article-title> in <source>2020 IEEE Symposium on Computers and Communications (ISCC)</source> (<publisher-loc>Rennes</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="pmid">37484459</pub-id></citation></ref>
<ref id="B123">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Peng</surname> <given-names>H.</given-names></name> <name><surname>Wu</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>Prediction of academic performance associated with internet usage behaviors using machine learning algorithms</article-title>. <source>Comput. Hum. Behav.</source> <volume>98</volume>, <fpage>166</fpage>&#x02013;<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1016/j.chb.2019.04.015</pub-id></citation>
</ref>
<ref id="B124">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>S.</given-names></name> <name><surname>Callaghan</surname> <given-names>V.</given-names></name></person-group> (<year>2021</year>). <article-title>Real-time human posture recognition using an adaptive hybrid classifier</article-title>. <source>Int. J. Machine Learn. Cybernet.</source> <volume>12</volume>, <fpage>489</fpage>&#x02013;<lpage>499</lpage>. <pub-id pub-id-type="doi">10.1007/s13042-020-01182-8</pub-id></citation>
</ref>
<ref id="B125">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Ma</surname> <given-names>B.</given-names></name> <name><surname>Hu</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name></person-group> (<year>2022</year>). <article-title>Accurate cotton diseases and pests detection in complex background based on an improved YOLOX model</article-title>. <source>Comput. Electr. Agri</source>. <volume>203</volume>:<fpage>107484</fpage>. <pub-id pub-id-type="doi">10.1016/j.compag.2022.107484</pub-id></citation>
</ref>
</ref-list>
</back>
</article>