<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in Computer Science | Mobile and Ubiquitous Computing section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/computer-science/sections/mobile-and-ubiquitous-computing</link>
        <description>RSS Feed for Mobile and Ubiquitous Computing section in the Frontiers in Computer Science journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>12 May 2026 08:16:01 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1763420</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1763420</link>
        <title><![CDATA[Hardware and software system for adaptive precision agriculture management within the consolidation of modern agrotechnologies in the crop production sector of the Kyrgyz Republic]]></title>
        <pubDate>03 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Alex Karpov</author><author>Oleg Dorofeev</author><author>Yuliya Smirnova</author><author>Irina Kulibaba</author><author>Yana Beresneva</author>
        <description><![CDATA[Background: Kyrgyzstan’s economy depends on agriculture. Of strategic importance are the natural conditions that form various ecosystems in different regions. The article highlights the problems that can be addressed by developing a unified software and hardware complex that ensures the fulfillment of the tasks associated with the agricultural production cycle. Materials and methods: The comprehensive application of systems analysis, system engineering, and object-oriented design methods enabled the development of models for the software and hardware complex. These models were used during the development phase following the incremental life cycle model. Results: The article presents models of the software and hardware structure, detailing the characteristics of each component and the categories of users along with their functionalities. Examples of user tasks and corresponding scenarios for using the digital system have been created. A prototype user interface has also been developed. Conclusion: The use of this development fosters the creation of favorable conditions for crop cultivation in accordance with modern agricultural technology principles, allowing for ongoing and objective monitoring of plant health and farm facilities. The results can be used to create a national digital agri-food system that promotes sustainable development and food security.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1700489</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1700489</link>
        <title><![CDATA[Bursting the bubble: data leakage and inflated deep learning accuracy in multivariate time-series frailty classification]]></title>
        <pubDate>23 Feb 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Charmayne Mary Lee Hughes</author><author>Yan Zhang</author>
        <description><![CDATA[Background: Accurate prediction of frailty in older adults is crucial for preventing adverse outcomes, yet distinguishing frail, pre-frail, and non-frail states remains challenging. A recent study applied InceptionTime to the GSTRIDE dataset and reported near-perfect multi-class frailty prediction (>98% accuracy), exceeding values typically observed in comparable studies. Methods: We conducted a methodological re-evaluation and replication of this pipeline to assess the robustness of reported performance. Corrections included subject-wise data partitioning, feature scaling within training folds, and non-overlapping sliding time windows applied separately to each subset to prevent potential leakage. Results: Reimplementation of the original pipeline reproduced the previously reported high accuracy. After applying the corrected framework, overall recall and precision decreased (50.9 and 52.3%, respectively), providing a more conservative, data-specific estimate of model generalizability. Per-class analysis indicated reductions across all categories, with Frail-class recall dropping to 21.4%, highlighting the particular challenge of identifying high-risk individuals. Conclusion: The findings suggest that methodological factors, such as data leakage, likely contributed to the previously reported high performance. Under rigorous controls, frailty prediction is challenging, particularly for the frail class, underscoring the need for careful evaluation of model generalizability. Significance: This study illustrates the importance of transparent, methodologically sound pipelines in clinical AI research. By providing a reproducible framework for frailty prediction, we aim to support future studies in obtaining realistic performance estimates and developing clinically meaningful models.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1752141</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1752141</link>
        <title><![CDATA[A biometric dataset for unconditioned gait identification using onboard smartphone sensors]]></title>
        <pubDate>27 Jan 2026 00:00:00 +0000</pubDate>
        <category>Data Report</category>
        <author>Ladeh Sardar Abdulrahman</author><author>Azhin T. Sabir</author><author>Halgurd S. Maghdid</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1663987</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1663987</link>
        <title><![CDATA[Classification of smartphone users as adult or child in both constrained and non-constrained environments using mRMR-based feature selection and an ensemble classifier]]></title>
        <pubDate>06 Jan 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Nikhat H. Faheem</author><author>Saad Yunus Sait</author>
        <description><![CDATA[A smartphone is an important electronic device used by people of all ages. Excessive smartphone usage among children can lead to various mental and physical problems. Hence, we believe that a control mechanism, if introduced, can help provide suitable content to users based on their age group. Our work focuses on detecting the age group of a user based on their smartphone usage habits. Most previous work has collected datasets from users in either constrained or non-constrained environments. To fill this research gap, we collected data from both environments and identified a generalized model that handles data from both. Our dataset was collected while users performed tasks such as typing, swiping, tapping, zooming, and measuring finger size. In a constrained environment, users must hold the phone in their hands or rest it on a table to finish the tasks, whereas in a non-constrained environment, users are permitted to move freely while performing the tasks. To achieve superior performance on both constrained and non-constrained data, we extracted new statistical features, followed by Minimum Redundancy Maximum Relevance (mRMR) feature selection to select an appropriate set of features; the optimal feature count was identified using cross-validation. We used an ensemble classifier that takes a vote on the predictions of XGBoost, Random Forest (RF), and support vector machine (SVM). We achieved 98.66% accuracy in constrained environments and 91.93% in non-constrained environments.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1696178</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1696178</link>
        <title><![CDATA[Assisting annotators of wearable activity recognition datasets through automated sensor-based suggestions]]></title>
        <pubDate>13 Nov 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Lukas Günthermann</author><author>Ivor Simpson</author><author>Phil Birch</author><author>Daniel Roggen</author>
        <description><![CDATA[Wearable activity recognition consists of recognizing the actions of people from on-body sensor data using machine learning. Developing suitable machine learning models typically requires substantial amounts of annotated training data, and manually annotating large datasets is tedious and time intensive. Interactive machine learning systems can support this process, with the aim of reducing annotation time or improving accuracy. We contribute a new web-based annotation tool for time-series signals synchronized with a video recording, with integrated automated suggestions, facilitated by ML models, that assist annotators and improve the annotation process. This is enabled by focusing user attention toward points of interest, which is particularly relevant for the annotation of long periodic activities, allowing fast navigation in large datasets without skipping the start and end points of activities. To evaluate the efficacy of this system, we conducted a user study with 32 participants who were tasked with annotating modes of locomotion in a dataset composed of multiple long (over 12 h) consecutive sensor recordings captured by body-worn accelerometers. We analyzed the quantitative impact on annotation performance and the qualitative impact on the user experience. The results show that the implemented annotation assistance improved annotation quality by 11% F1 score but reduced annotation speed by 20%, whereas the NASA Task Load Index results show that participants perceived the assistance as beneficial for annotation speed but not for annotation quality.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1575404</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1575404</link>
        <title><![CDATA[Unobtrusive stress detection using wearables: application and challenges in a university setting]]></title>
        <pubDate>18 Aug 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Peter Neigel</author><author>Andrew Vargo</author><author>Benjamin Tag</author><author>Koichi Kise</author>
        <description><![CDATA[Introduction: In theory, wearable physiological sensing devices offer an opportunity for institutions to monitor and manage the health and well-being of a group of people. For instance, schools or universities could leverage these devices to track rising stress levels or detect signs of illness among students. Advances in sensing accuracy and utility design in wearables might make this feasible; however, real-world adoption faces challenges, as users often fail to wear or use these devices consistently and correctly. Additionally, institutional monitoring raises privacy concerns. Methods: In this study, we analyze real-world data from a cohort of 103 Japanese university students to identify periods of cyclical stress while ensuring individual privacy through aggregation. We identify potential stress patterns by observing elevated waking heart rate (HR) and maximum waking HR, supported by related metrics such as sleep HR, sleep heart rate variability (HRV), activity patterns, and sleep phases. Results: The physiological changes align with significant academic and societal events, indicating a strong link to stress. Discussion: Our findings demonstrate the potential of consumer wearables to detect collective changes in stress biomarkers within a cohort using in-the-wild data, i.e., data that is noisy and has gaps. Furthermore, we explore how universities could implement such monitoring in practice, highlighting both the potential benefits and challenges of real-world application.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1597143</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1597143</link>
        <title><![CDATA[Deep learning for human locomotion analysis in lower-limb exoskeletons: a comparative study]]></title>
        <pubDate>13 Aug 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Omar Coser</author><author>Christian Tamantini</author><author>Matteo Tortora</author><author>Leonardo Furia</author><author>Rosa Sicilia</author><author>Loredana Zollo</author><author>Paolo Soda</author>
        <description><![CDATA[Introduction: Wearable robotics for lower-limb assistance is increasingly investigated to enhance mobility in individuals with physical impairments and to augment performance in able-bodied users. A major challenge in this domain is the development of accurate and adaptive control systems that ensure seamless human-robot interaction across diverse terrains. While neural networks have recently shown promise in time-series analysis, no prior work has tackled the combined task of classifying ground conditions into five terrain classes and estimating high-level locomotion parameters such as ramp slope and stair height. Methods: This study presents an experimental comparison of eight deep neural network architectures for terrain classification and locomotion parameter estimation. The models are trained on the publicly available CAMARGO 2021 dataset using inertial (IMU) and electromyographic (EMG) signals. Particular attention is given to evaluating the performance of IMU-only inputs versus combined IMU+EMG data, with an emphasis on cost-efficiency and sensor minimization. The tested architectures include LSTM, CNN, and hybrid CNN-LSTM models, among others. Model explainability is assessed via SHAP analysis to guide sensor selection. Results: IMU-only configurations matched or outperformed those using both IMU and EMG, supporting a more efficient setup. The LSTM model, using only three IMU sensors, achieved high terrain classification accuracy (0.94 ± 0.04) and reliably estimated ramp slopes (1.95 ± 0.58°). The CNN-LSTM architecture demonstrated superior performance in stair height estimation, achieving an accuracy of 15.65 ± 7.40 mm. SHAP analysis confirmed that sensor reduction did not compromise model accuracy. Discussion: The results highlight the feasibility of using lightweight, IMU-only setups for real-time terrain classification and locomotion parameter estimation. The proposed system achieves an inference time of ~2 ms, making it suitable for real-time wearable robotics applications. This study paves the way for more accessible and deployable solutions in assistive and augmentative lower-limb robotic systems. Code and models are publicly available at https://github.com/cosbidev/Human-Locomotion-Identification.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1569205</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1569205</link>
        <title><![CDATA[Improving IMU based human activity recognition using simulated multimodal representations and a MoE classifier]]></title>
        <pubDate>12 Aug 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Lala Shakti Swarup Ray</author><author>Qingxin Xia</author><author>Vitor Fortes Rey</author><author>Kaishun Wu</author><author>Paul Lukowicz</author>
        <description><![CDATA[The lack of labeled sensor data for Human Activity Recognition (HAR) has driven researchers to synthesize Inertial Measurement Unit (IMU) data from video, utilizing the rich activity annotations available in video datasets. However, current synthetic IMU data often struggles to capture subtle, fine-grained motions, limiting its effectiveness in real-world HAR applications. To address these limitations, we introduce Multi3Net+, an advanced framework leveraging cross-modal, multitask representations of text, pose, and IMU data. Building on its predecessor, Multi3Net, it uses improved pre-training strategies and a mixture of experts classifier to effectively learn robust joint representations. By leveraging refined contrastive learning across modalities, Multi3Net+ bridges the gap between video and wearable sensor data, enhancing HAR performance for complex, fine-grained activities. Our experiments validate the superiority of Multi3Net+, showing significant improvements in generating high-quality synthetic IMU data and achieving state-of-the-art performance in wearable HAR tasks. These results demonstrate the efficacy of the proposed approach in advancing real-world HAR by combining cross-modal learning with multi-task optimization.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1585632</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1585632</link>
        <title><![CDATA[Mobile application based on KDD to predict high-crime areas and promote sustainability in citizen security in a district of Lima-Perú]]></title>
        <pubDate>11 Aug 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Hugo Vega-Huerta</author><author>Javier Vilca Velasquez</author><author>Nicolas Anicama Espinoza</author><author>Gisella Luisa Elena Maquen-Niño</author><author>Luis Guerra-Grados</author><author>Jorge Pantoja-Collantes</author><author>Oscar Benito-Pacheco</author><author>Juan Carlos Lázaro-Guillermo</author><author>Adegundo Camara-Figueroa</author><author>Javier Cabrera-Díaz</author><author>Rubén Gil-Calvo</author><author>Frida López-Córdova</author>
        <description><![CDATA[Metropolitan Lima faces a serious citizen security situation, reflected in high rates of crime and violence in several districts. The development of a mobile application to identify and predict areas of high crime incidence is proposed. Using historical data of criminal incidents and reports registered by users in the application, models capable of predicting the occurrence of crimes in real time are trained. The data mining process follows the KDD methodology, which includes the stages of selection, preprocessing, transformation, data mining, evaluation, and knowledge consolidation. Machine learning algorithms, such as Random Forest and Gradient Boosting, were used to make these predictions. Visualization techniques, such as heat maps, were also used to represent crime events and facilitate their understanding by users. The results show an accuracy of 88% for the Random Forest algorithm and 91% for Gradient Boosting in predicting the occurrence of crimes, which demonstrates the effectiveness of machine learning models in improving citizen security in Metropolitan Lima. These findings have significant implications for crime prevention and suggest that the application of these technologies can be fundamental to addressing security challenges in the city.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1464348</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1464348</link>
        <title><![CDATA[Trusting AI: does uncertainty visualization affect decision-making?]]></title>
        <pubDate>07 Feb 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Jonatan Reyes</author><author>Anil Ufuk Batmaz</author><author>Marta Kersten-Oertel</author>
        <description><![CDATA[Introduction: Decision-making based on AI can be challenging, especially when considering the uncertainty associated with AI predictions. Visualizing uncertainty in AI refers to techniques that use visual cues to represent the level of confidence or uncertainty in an AI model's outputs, such as predictions or decisions. This study aims to investigate the impact of visualizing uncertainty on decision-making and trust in AI. Methods: We conducted a user study with 147 participants, utilizing static classic gaming scenarios as a proxy for human-AI collaboration in decision-making. The study measured changes in decisions, trust in AI, and decision-making confidence when uncertainty was visualized in a continuous format compared to a binary output of the AI model. Results: Our findings indicate that visualizing uncertainty significantly enhances trust in AI for 58% of participants with negative attitudes toward AI. Additionally, 31% of these participants found uncertainty visualization to be useful. The size of the uncertainty visualization was identified as the method that had the most impact on participants' trust in AI and their confidence in their decisions. Furthermore, we observed a strong association between participants' gaming experience and changes in decision-making when uncertainty was visualized, as well as a strong link between trust in AI and individual attitudes toward AI. Discussion: These results suggest that visualizing uncertainty can improve trust in AI, particularly among individuals with negative attitudes toward AI. The findings also have important implications for the design of human-AI decision-support systems, offering insights into how uncertainty can be visualized to enhance decision-making and user confidence.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1514933</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1514933</link>
        <title><![CDATA[WIMUSim: simulating realistic variabilities in wearable IMUs for human activity recognition]]></title>
        <pubDate>23 Jan 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Nobuyuki Oishi</author><author>Phil Birch</author><author>Daniel Roggen</author><author>Paula Lago</author>
        <description><![CDATA[Introduction: Physics simulation has emerged as a promising approach to generate virtual Inertial Measurement Unit (IMU) data, offering a solution to reduce the extensive cost and effort of real-world data collection. However, the fidelity of virtual IMU data depends heavily on the quality of the source motion data, which varies with motion capture setups. We hypothesize that improving virtual IMU fidelity is crucial to fully harness the potential of physics simulation for virtual IMU data generation in training Human Activity Recognition (HAR) models. Method: To investigate this, we introduce WIMUSim, a 6-axis wearable IMU simulation framework designed to accurately parameterize real IMU properties when deployed on people. WIMUSim models IMUs in wearable sensing using four key parameters: Body (skeletal model), Dynamics (movement patterns), Placement (device positioning), and Hardware (IMU characteristics). Using these parameters, WIMUSim simulates virtual IMU data through differentiable vector manipulations and quaternion rotations. A key novelty enabled by this approach is the identification of WIMUSim parameters using recorded real IMU data through gradient descent-based optimization, starting from an initial estimate. This process enhances the fidelity of the virtual IMU by optimizing the parameters to closely mimic the recorded IMU data. Adjusting these identified parameters allows us to introduce physically plausible variabilities. Results: Our fidelity assessment demonstrates that WIMUSim accurately replicates real IMU data with optimized parameters and realistically simulates changes in sensor placement. Evaluations using exercise and locomotion activity datasets confirm that models trained with optimized virtual IMU data perform comparably to those trained with real IMU data. Moreover, we demonstrate the use of WIMUSim for data augmentation through two approaches: Comprehensive Parameter Mixing, which enhances data diversity by varying parameter combinations across subjects, outperforming models trained with real and non-optimized virtual IMU data by 4–10 percentage points (pp); and Personalized Dataset Generation, which customizes augmented datasets to individual user profiles, resulting in average accuracy improvements of 4 pp, with gains exceeding 10 pp for certain subjects. Discussion: These results underscore the benefit of high-fidelity virtual IMU data and WIMUSim's utility in developing effective data generation strategies, alleviating the challenge of data scarcity in sensor-based HAR.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1525382</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1525382</link>
        <title><![CDATA[Towards an intelligent energy conservation approach for context-aware systems in smart environments]]></title>
        <pubDate>19 Dec 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Umar Mahmud</author><author>Shariq Hussain</author><author>Tehmina Karamat</author>
        <description><![CDATA[A smart personal space is a context-aware system that recognizes situations using contextual data. A user interacts within the personal space using smart devices that are mobile and run on batteries with limited power. This paper proposes a Power-Constrained Context-Aware System (PCCA) that uses Markov Chain-based pre-classification to predict context change and defer context processing to conserve energy in an intelligent way. A new Markov Chain Module is added that creates a Markov Chain using history information. This enables PCCA to predict context change for the next observation. The results show that PCCA consumes 37% less power than a context-aware system.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1478851</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1478851</link>
        <title><![CDATA[Detection and monitoring of stress using wearables: a systematic review]]></title>
        <pubDate>18 Dec 2024 00:00:00 +0000</pubDate>
        <category>Systematic Review</category>
        <author>Anuja Pinge</author><author>Vinaya Gad</author><author>Dheryta Jaisighani</author><author>Surjya Ghosh</author><author>Sougata Sen</author>
        <description><![CDATA[Over the last few years, wearable devices have witnessed immense changes in terms of sensing capabilities. Wearable devices, with their ever-increasing number of sensors, have been instrumental in monitoring human activities, health-related indicators, and overall wellness. One health-related area that has rapidly adopted wearable devices is mental health monitoring and well-being, which covers problems such as psychological distress. The continuous monitoring capability of wearable devices allows the detection and monitoring of stress, thus enabling early detection of mental health problems. In this paper, we present a systematic review of the different types of sensors and wearable devices used by researchers to detect and monitor stress in individuals. We identify and detail the tasks explored by such research works, including data collection, data pre-processing, feature computation, and model training. We review each step involved in stress detection and monitoring. We also discuss the scope and opportunities for further research that deals with the management of stress once it is detected.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1465793</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1465793</link>
        <title><![CDATA[Editorial: Wearable computing, volume II]]></title>
        <pubDate>07 Aug 2024 00:00:00 +0000</pubDate>
        <category>Editorial</category>
        <author>Bo Zhou</author><author>Cheng Zhang</author><author>Bashima Islam</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1379788</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1379788</link>
        <title><![CDATA[A matter of annotation: an empirical study on in situ and self-recall activity annotations from wearable sensors]]></title>
        <pubDate>18 Jul 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Alexander Hoelzemann</author><author>Kristof Van Laerhoven</author>
        <description><![CDATA[Research into the detection of human activities from wearable sensors is a highly active field, benefiting numerous applications, from ambulatory monitoring of healthcare patients via fitness coaching to streamlining manual work processes. We present an empirical study that evaluates and contrasts four commonly employed annotation methods in user studies focused on in-the-wild data collection. For both the user-driven, in situ annotations, where participants annotate their activities during the actual recording process, and the recall methods, where participants retrospectively annotate their data at the end of each day, the participants had the flexibility to select their own set of activity classes and corresponding labels. Our study illustrates that different labeling methodologies directly impact the annotations' quality, as well as the capabilities of a deep learning classifier trained with the data. We noticed that in situ methods produce fewer but more precise labels than recall methods. Furthermore, we combined an activity diary with a visualization tool that enables participants to inspect and label their activity data. The introduction of this tool allowed us to decrease missing annotations and increase annotation consistency, and therefore improve the F1-score of the deep learning model by up to 8% (ranging between 82.1 and 90.4% F1-score). Furthermore, we discuss the advantages and disadvantages of the methods compared in our study, the biases they could introduce, and the consequences of their usage on human activity recognition studies, as well as possible solutions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1394397</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1394397</link>
        <title><![CDATA[Energy-efficient, low-latency, and non-contact eye blink detection with capacitive sensing]]></title>
        <pubDate>11 Jun 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Mengxi Liu</author><author>Sizhen Bian</author><author>Zimin Zhao</author><author>Bo Zhou</author><author>Paul Lukowicz</author>
        <description><![CDATA[This work describes a novel non-contact, wearable, real-time eye blink detection solution based on capacitive sensing technology. A custom-built prototype employing low-cost and low-power capacitive sensors was integrated into standard glasses, with a copper tape electrode affixed to the frame. The blink of an eye induces a variation in capacitance between the electrode and the eyelid, thereby generating a distinctive capacitance-related signal. By analyzing this signal, eye blink activity can be accurately identified. The effectiveness and reliability of the proposed solution were evaluated through five distinct scenarios involving eight participants. Utilizing a user-dependent detection method with a customized predefined threshold value, an average precision of 92% and a recall of 94% were achieved. Furthermore, an efficient user-independent model based on a two-bit precision decision tree was applied, yielding an average precision of 80% and an average recall of 81%. These results demonstrate the potential of the proposed technology for real-world applications requiring precise and unobtrusive eye blink detection.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1385392</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1385392</link>
        <title><![CDATA[A magnetometer-based method for in-situ syncing of wearable inertial measurement units]]></title>
        <pubdate>2024-04-19T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Thomas J. Gilbert</author><author>Zexiao Lin</author><author>Sally Day</author><author>Antonia F. de C. Hamilton</author><author>Jamie A. Ward</author>
        <description><![CDATA[This paper presents a novel method to synchronize multiple wireless inertial measurement unit (IMU) sensors using their onboard magnetometers. The basic method uses an external electromagnetic pulse to create a known event measured by the magnetometers of multiple IMUs, and in turn uses this to synchronize the devices. An initial evaluation using four commercial IMUs reveals a maximum error of 40 ms per hour, as limited by a 25 Hz sample rate. Building on this, we introduce a novel method to improve synchronization beyond the limitations imposed by the sample rate and evaluate it in a further study using eight IMUs. We show that a sequence of electromagnetic pulses, lasting less than 3 s in total, can reduce the maximum synchronization error to 8 ms (for a 25 Hz sample rate, and accounting for the transient response time of the magnetic field generator). An advantage of this method is that it can be applied to several devices, either simultaneously or individually, without the need to remove them from the context in which they are being used. This makes the approach particularly suited to synchronizing multi-person on-body sensors while they are being worn.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1379925</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1379925</link>
        <title><![CDATA[A comprehensive evaluation of marker-based, markerless methods for loose garment scenarios in varying camera configurations]]></title>
        <pubdate>2024-04-05T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Lala Shakti Swarup Ray</author><author>Bo Zhou</author><author>Sungho Suh</author><author>Paul Lukowicz</author>
        <description><![CDATA[In support of smart wearable researchers striving to select optimal ground truth methods for motion capture across a spectrum of loose garment types, we present an extended benchmark named DrapeMoCapBench (DMCB+). This augmented benchmark incorporates a more intricate limb-wise Motion Capture (MoCap) accuracy analysis and an enhanced drape calculation, and introduces a novel benchmarking tool that encompasses multicamera deep learning MoCap methods. DMCB+ is specifically designed to evaluate the performance of both optical marker-based and markerless MoCap techniques, taking into account the challenges posed by various loose garment types. While high-cost marker-based systems are acknowledged for their precision, they often require skin-tight markers on bony areas, which can be impractical with loose garments. On the other hand, markerless MoCap methods driven by computer vision models have evolved to be more cost-effective, utilizing smartphone cameras and exhibiting promising results. Utilizing real-world MoCap datasets, DMCB+ conducts 3D physics simulations with a comprehensive set of variables, including six drape levels, three motion intensities, and six body-gender combinations. The extended benchmark provides a nuanced analysis of advanced marker-based and markerless MoCap techniques, highlighting their strengths and weaknesses across distinct scenarios. In particular, DMCB+ reveals that when evaluating casual loose garments, both marker-based and markerless methods exhibit notable performance degradation (>10 cm). However, in scenarios involving everyday activities with basic and swift motions, markerless MoCap outperforms marker-based alternatives. This positions markerless MoCap as an advantageous and economical choice for wearable studies. The inclusion of a multicamera deep learning MoCap method in the benchmarking tool further expands the scope, allowing researchers to assess the capabilities of cutting-edge technologies in diverse motion capture scenarios.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1365500</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1365500</link>
        <title><![CDATA[Augmenting context with power information for green context-awareness in smart environments]]></title>
        <pubdate>2024-03-07T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Umar Mahmud</author><author>Shariq Hussain</author>
        <description><![CDATA[The increase in the use of smart devices has led to the realization of the Internet of Everything (IoE). The heart of an IoE environment is a Context-Aware System that facilitates service discovery, delivery, and adaptation based on context classification. Traditionally, context has been defined in a domain-dependent way. Classical models of context have focused on rich context and lack a Cost of Context (CoC) that can be used for decision support. The authors present a philosophy-inspired mathematical model of context that includes confidence in the activity classification of the context, the actions performed, and power information. Since a single recurring activity can lead to distinct actions performed at different times, it is better to record the actions. The power information includes the power consumed in the complete context processing and is a quality attribute of the context. Power consumption is a useful metric for CoC and is suitable for power-constrained context awareness. To demonstrate the effectiveness of the proposed work, example contexts are described, and the context model is presented mathematically in this study. The context is aggregated with power information, actions, and confidence in the classification outcome, leading to the concept of situational context. The results show that context gathered through sensor data and deduced through remote services can be made richer with CoC parameters.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1347424</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1347424</link>
        <title><![CDATA[Where to mount the IMU? Validation of joint angle kinematics and sensor selection for activities of daily living]]></title>
        <pubdate>2024-02-29T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Lena Uhlenberg</author><author>Oliver Amft</author>
        <description><![CDATA[We validate the OpenSense framework for IMU-based joint angle estimation and furthermore analyze the framework's ability to support sensor selection and optimal positioning during activities of daily living (ADL). Personalized musculoskeletal models were created from anthropometric data of 19 participants. Quaternion coordinates were derived from measured IMU data and served as input to the simulation framework. Six ADLs involving upper and lower limbs were measured and a total of 26 angles analyzed. We compared the joint kinematics of IMU-based simulations with those of optical marker-based simulations for the most important angles per ADL. Additionally, we analyzed the influence of sensor count on estimation performance and deviations between joint angles, and derived the best sensor combinations. We report differences in functional range of motion difference (fRoMD) estimation performance. Results for IMU-based simulations showed MAD, RMSE, and fRoMD of 4.8°, 6.6°, and 7.2° for lower limbs and 9.2°, 11.4°, and 13.8° for upper limbs, depending on the ADL. Overall, sagittal plane movements (flexion/extension) showed lower median MAD, RMSE, and fRoMD compared to transversal and frontal plane movements (rotations, adduction/abduction). Analysis of sensor selection showed that beyond three sensors for the lower limbs and four sensors for the complex shoulder joint, the estimation error decreased only marginally. The global optimum (lowest RMSE) was obtained with five to eight sensors, depending on the joint angle, across all ADLs. The sensor combinations with the minimum count were a subset of the most frequent sensor combinations within a narrowed search space of the 5% lowest error range across all ADLs and participants. The smallest errors were on average <2° over all joint angles. Our results showed that the open-source OpenSense framework not only serves as a valid tool for realistic representation of joint kinematics and fRoM, but also yields valid results for IMU sensor selection for a comprehensive set of ADLs involving upper and lower limbs. The results can help researchers determine appropriate sensor positions and sensor configurations without the need for detailed biomechanical knowledge.]]></description>
      </item>
      </channel>
    </rss>