<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0">
      <channel xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <title>Frontiers in ICT | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/ict</link>
        <description>RSS Feed for Frontiers in ICT | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>2026-04-07T14:41:04.991+00:00</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2020.00001</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2020.00001</link>
        <title><![CDATA[Project Westdrive: Unity City With Self-Driving Cars and Pedestrians for Virtual Reality Studies]]></title>
        <pubDate>2020-01-31T00:00:00Z</pubDate>
        <category>Code</category>
        <author>Farbod N. Nezami</author><author>Maximilian A. Wächter</author><author>Gordon Pipa</author><author>Peter König</author>
        <description><![CDATA[Virtual environments will deeply alter the way we conduct scientific studies on human behavior. Possible applications range from spatial navigation, through addressing moral dilemmas in a more natural manner, to therapeutic applications for affective disorders. The decisive factor for this broad range of applications is that virtual reality (VR) is able to combine a well-controlled experimental environment with the ecological validity of the immersion of test subjects. Until now, however, programming such an environment in Unity® has required profound knowledge of C# programming, 3D design, and computer graphics. In order to give interested research groups access to a realistic VR environment that can easily be adapted to the varying needs of experiments, we developed a large, open source, scriptable, and modular VR city. It covers an area of 230 hectares and includes up to 150 self-driving vehicles, 655 active and passive pedestrians, and thousands of nature assets, making it both highly dynamic and realistic. Furthermore, the repository presented here contains a stand-alone City AI toolkit for creating avatars and customizing cars. Finally, the package contains code to easily set up VR studies. All main functions are integrated into the graphical user interface of the Unity® Editor to ease the use of the embedded functionalities. In summary, the project, named Westdrive, was developed to enable research groups to access a state-of-the-art VR environment that is easily adapted to specific needs and allows focus on the respective research question.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00019</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00019</link>
        <title><![CDATA[The Syncopated Energy Algorithm for Rendering Real-Time Tactile Interactions]]></title>
        <pubDate>2019-10-30T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Fei Tang</author><author>Ryan P. McMahan</author>
        <description><![CDATA[In this paper, we present a novel vibrotactile rendering algorithm for producing real-time tactile interactions suitable for virtual reality applications. The algorithm uses an energy model to produce smooth tactile sensations by continuously recalculating the location of a phantom actuator that represents a virtual touch point. It also employs syncopations in its rendered amplitude to produce artificial perceptual anchors that make the rendered vibrotactile patterns more recognizable. We conducted two studies to compare this Syncopated Energy algorithm to a standard real-time Grid Region algorithm for rendering touch patterns at different vibration amplitudes and frequencies. We found that the Grid Region algorithm afforded better recognition, but that the Syncopated Energy algorithm was perceived to produce smoother patterns at higher amplitudes. Additionally, we found that higher amplitudes afforded better recognition while a moderate amplitude yielded more perceived continuity. We also found that a higher frequency resulted in better recognition for fine-grained tactile sensations and that frequency can affect perceived continuity.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00020</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00020</link>
        <title><![CDATA[Dyadic Interference Leads to Area of Uncertainty During Face-to-Face Cooperative Interception Task]]></title>
        <pubDate>2019-10-29T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Charles Faure</author><author>Annabelle Limballe</author><author>Anthony Sorel</author><author>Théo Perrin</author><author>Benoit Bideau</author><author>Richard Kulpa</author>
        <description><![CDATA[People generally coordinate their actions to be more efficient. However, in some cases, interference between them occurs, resulting in inefficient collaboration. For example, if two volleyball players collide while performing a serve reception, they can both miss the ball. The main goal of this study is to explore the way two persons regulate their actions when performing a cooperative ball-interception task, and how interference between them may occur. Starting face to face, twenty-four participants (twelve teams of two) had to physically intercept balls moving down from the roof to the floor of a virtual room. To this end, they controlled a virtual paddle attached to their hand moving along the anterior-posterior axis. No communication was allowed between participants, so they had to rely on visual cues to decide whether they should perform the interception or let the partner do it. Participants were immersed in a stereoscopic virtual reality setup that allowed control of the situation and of the visual stimuli they perceived, such as ball trajectories and the information available on the partner's motion. Results globally showed that participants were often able to intercept balls without collision by dividing the interception space into two equivalent parts. However, an area of uncertainty (where many trials were not intercepted) appeared in the center of the scene, highlighting the presence of interference between participants. The width of this area increased when the situation became more complex (facing a real partner rather than a stationary one) and when less information was available (only the paddle and not the partner's avatar). Moreover, participants initiated their interception later when a real partner was present and often interpreted balls starting above them as balls they should intercept, even when these balls were ultimately intercepted by their partner. Overall, results showed that team coordination here emerges from between-participant interactions and that interference between participants depends on task complexity (uncertainty about the partner's action and the visual information available).]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00018</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00018</link>
        <title><![CDATA[Eyelid and Pupil Landmark Detection and Blink Estimation Based on Deformable Shape Models for Near-Field Infrared Video]]></title>
        <pubDate>2019-10-14T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Siyuan Chen</author><author>Julien Epps</author>
        <description><![CDATA[The eyelid contour, pupil contour, and blink event are important features of eye activity, and their estimation is a crucial research area for emerging wearable camera-based eyewear in a wide range of applications, e.g., mental state estimation. Current approaches often estimate a single eye activity, such as blink or pupil center, from far-field and non-infrared (IR) eye images, and often depend on knowledge of other eye components. This paper presents a unified approach to simultaneously estimate the landmarks for the eyelids, the iris, and the pupil, and to detect blink from near-field IR eye images based on a statistically learned deformable shape model and local appearance. Unlike in the facial landmark estimation problem, different shape models are applied to all eye states—closed eye, open eye with iris visible, and open eye with iris and pupil visible—to deal with the self-occluding interactions among the eye components. The most likely eye state is determined based on the learned local appearance. Evaluation on three different realistic datasets demonstrates that the proposed three-state deformable shape model achieves state-of-the-art performance for the open eye with iris and pupil state, where the normalized error was lower than 0.04. Blink detection can be as high as 90% in recall performance, without direct use of pupil detection. Cross-corpus evaluation results show that the proposed method improves on the state-of-the-art eyelid detection algorithm. This unified approach greatly facilitates eye activity analysis for research and practice when different types of eye activity are required, rather than employing different techniques for each type. Our work is the first study to propose a unified approach for eye activity estimation from near-field IR eye images, and it achieves state-of-the-art eyelid estimation and blink detection performance.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00017</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00017</link>
        <title><![CDATA[Toward Industry 4.0 With IoT: Optimizing Business Processes in an Evolving Manufacturing Factory]]></title>
        <pubDate>2019-08-28T00:00:00Z</pubDate>
        <category>Technology Report</category>
        <author>Laura Belli</author><author>Luca Davoli</author><author>Alice Medioli</author><author>Pier Luigi Marchini</author><author>Gianluigi Ferrari</author>
        <description><![CDATA[Research advances in the last decades have allowed the introduction of Internet of Things (IoT) concepts in several industrial application scenarios, leading to the so-called Industry 4.0 or Industrial IoT (IIoT). Industry 4.0 has the ambition to revolutionize industry management and business processes, enhancing the productivity of manufacturing technologies through field data collection and analysis, thus creating real-time digital twins of industrial scenarios. Moreover, it is vital for companies to be as “smart” as possible and to adapt to the varying nature of digital supply chains. This is possible by leveraging IoT in Industry 4.0 scenarios. In this paper, we describe the renovation process, guided by things2i s.r.l., a cross-disciplinary engineering-economic spin-off company of the University of Parma, which a real manufacturing industry is undergoing over consecutive phases spanning a few years. The first phase concerns the digitalization of the quality control process, specifically related to the company's production lines. The use of paper sheets containing different quality checks has been made smarter through the introduction of a digital, smart, and Web-based application, which is currently supporting operators and quality inspectors working on the supply chain through the use of smart devices. The second phase of the IIoT evolution—currently on-going—concerns both digitalization and optimization of the production planning activity, through an innovative Web-based planning tool. The changes introduced have led to significant advantages and improvements for the manufacturing company, in terms of: (i) impressive cost reduction; (ii) better product quality control; (iii) real-time detection of and reaction to supply chain issues; (iv) significant reduction of the time spent in planning activities; and (v) resource employment optimization, thanks to the minimization of unproductive setup times on production lines. These two renovation phases represent a basis for possible future developments, such as the integration of sensor-based data on the operational status of production machines and the currently available warehouse supplies. In conclusion, the on-going Industry 4.0-based digitization process guided by things2i makes it possible to continuously collect heterogeneous Human-to-Things (H2T) data, which can be used to optimize the partner manufacturing company as a whole.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00016</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00016</link>
        <title><![CDATA[Superimposing 3D Virtual Self + Expert Modeling for Motor Learning: Application to the Throw in American Football]]></title>
        <pubDate>2019-08-07T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Thibaut Le Naour</author><author>Ludovic Hamon</author><author>Jean-Pierre Bresciani</author>
        <description><![CDATA[We learn and/or relearn motor skills at all ages. Feedback plays a crucial role in this learning process, and Virtual Reality (VR) constitutes a unique tool to provide feedback and improve motor learning. In particular, VR grants the possibility to edit 3D movements and display augmented feedback in real time. Here we combined VR and motion capture to provide learners with a 3D feedback superimposing in real time the reference movements of an expert (expert feedback) to the movements of the learner (self feedback). We assessed the effectiveness of this feedback for the learning of a throwing movement in American football. This feedback was used during (concurrent feedback) and/or after movement execution (delayed feedback), and it was compared with a feedback displaying only the reference movements of the expert. In contrast with more traditional studies relying on video feedback, we used the Dynamic Time Warping algorithm coupled to motion capture to measure the spatial characteristics of the movements. We also assessed the regularity with which the learner reproduced the reference movement along its path. For that, we used a new metric computing the dispersion of distance around the mean distance over time. Our results show that when the movements of the expert were superimposed on the movements of the learner during learning (i.e., self + expert), the reproduction of the reference movement improved significantly. Furthermore, providing feedback about the movements of the expert only did not give rise to any significant improvement regarding movement reproduction.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00015</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00015</link>
        <title><![CDATA[A Composite Structure for Fast Name Prefix Lookup]]></title>
        <pubDate>2019-08-02T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Jiawei Hu</author><author>Hui Li</author>
        <description><![CDATA[The name-based forwarding plane is a critical but challenging component of Named Data Networking (NDN), where a hash table is an appealing candidate for the data structure used in the FIB owing to its fast lookup speed. However, the hash table is flawed in that it does not naturally support the longest-prefix-matching (LPM) algorithm for name-based forwarding. To support LPM in the hash table, besides linear lookup, random search (such as binary search) aims at increasing lookup speed by reconstructing the FIB and optimizing the search path. We propose a composite data structure for random search based on the combination of a hash table and a trie; the latter is introduced to preserve the logical associations among names, so as to recycle memory and prevent the so-called backtracking problem, thus enhancing lookup efficiency. Our experiments indicate the superiority of our scheme in lookup speed; the impact on memory consumption has also been evaluated.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00014</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00014</link>
        <title><![CDATA[An Interactive and Multimodal Virtual Mind Map for Future Workplace]]></title>
        <pubDate>2019-07-31T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>David Kuťák</author><author>Milan Doležal</author><author>Bojan Kerous</author><author>Zdenek Eichler</author><author>Jiří Vašek</author><author>Fotis Liarokapis</author>
        <description><![CDATA[Traditional types of mind maps involve means of visually organizing information. They can be created either using physical tools like paper or post-it notes or through a computer-mediated process. Although their utility is established, mind maps and associated methods usually have several shortcomings with regard to effective and intuitive interaction as well as effective collaboration. The latest developments in virtual reality demonstrate new capabilities of visual and interactive augmentation, and in this paper, we propose a multimodal virtual reality mind map that has the potential to transform the ways in which people interact, communicate, and share information. The shared virtual space allows users to be located virtually in the same meeting room and participate in an immersive experience. Users of the system can create, modify, and group notes in categories and intuitively interact with them. They can create or modify inputs using voice recognition, interact using virtual reality controllers, and then make posts on the virtual mind map. When a brainstorming session is finished, users are able to vote on the content and export it for later usage. A user evaluation with 32 participants assessed the effectiveness of the virtual mind map and its functionality. Results indicate that this technology has the potential to be adopted in practice in the future, but a comparative study needs to be performed to reach a more general conclusion.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00013</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00013</link>
        <title><![CDATA[A Hybrid Solution Method for the Capacitated Vehicle Routing Problem Using a Quantum Annealer]]></title>
        <pubDate>2019-06-25T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Sebastian Feld</author><author>Christoph Roch</author><author>Thomas Gabor</author><author>Christian Seidel</author><author>Florian Neukart</author><author>Isabella Galter</author><author>Wolfgang Mauerer</author><author>Claudia Linnhoff-Popien</author>
        <description><![CDATA[The Capacitated Vehicle Routing Problem (CVRP) is an NP-optimization problem (NPO) that has been of great interest for decades to both science and industry. The CVRP is a variant of the vehicle routing problem characterized by capacity-constrained vehicles. The aim is to plan tours for vehicles to supply a given number of customers as efficiently as possible. The problem is the combinatorial explosion of possible solutions, which grows superexponentially with the number of customers. Classical solutions provide good approximations to the globally optimal solution. D-Wave's quantum annealer is a machine designed to solve optimization problems. This machine uses quantum effects to speed up computation time compared to classical computers. The challenge in solving the CVRP on the quantum annealer is the particular formulation of the optimization problem: it has to be mapped onto a quadratic unconstrained binary optimization (QUBO) problem. Complex optimization problems such as the CVRP can be divided into smaller subproblems, enabling a sequential solution of the partitioned problem. This work presents a quantum-classical hybrid solution method for the CVRP. It clarifies whether the implementation of such a method pays off in comparison to existing classical solution methods regarding computation time and solution quality. Several approaches to solving the CVRP are elaborated, the arising problems are discussed, and the results are evaluated in terms of solution quality and computation time.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00011</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00011</link>
        <title><![CDATA[Technology Use and Attitudes in Music Learning]]></title>
        <pubDate>2019-05-31T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>George Waddell</author><author>Aaron Williamon</author>
        <description><![CDATA[While the expansion of technologies into the music education classroom has been studied in great depth, there is a lack of published literature regarding the use of digital technologies by students learning in individual settings. Do musicians take their technology use into the practice room and teaching studio, or does the traditional nature of the master-apprentice teaching model promote different attitudes among musicians toward their use of technology in learning to perform? To investigate these issues, we developed the Technology Use and Attitudes in Music Learning Survey, which included adaptations of Davis's 1989 scales for Perceived Usefulness and Perceived Ease of Use of Technology. Data were collected from an international cohort of 338 amateur, student, and professional musicians ranging widely in age, specialism, and musical experience. Results showed a generally positive attitude toward current and future technology use among musicians and supported the Technology Acceptance Model (TAM), wherein technology use in music learning was predicted by perceived ease of use via perceived usefulness. Musicians' self-rated skills with smartphones, laptops, and desktop computers were found to extend beyond traditional audio and video recording devices, and the majority of musicians reported using classic music technologies (e.g., metronomes and tuners) on smartphones and tablets rather than bespoke devices. Despite this comfort with and access to new technology, availability reported within one-to-one lessons was half of that within practice sessions, and while a large percentage of musicians actively recorded their playing, these recordings were not frequently reviewed. Our results highlight opportunities for technology to take a greater role in improving music learning through enhanced student-teacher interaction and by facilitating self-regulated learning.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00009</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00009</link>
        <title><![CDATA[Understanding Health Information Technologies as Complex Interventions With the Need for Thorough Implementation and Monitoring to Sustain Patient Safety]]></title>
        <pubDate>2019-05-17T00:00:00Z</pubDate>
        <category>Opinion</category>
        <author>Julian Wienert</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00010</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00010</link>
        <title><![CDATA[Application of Machine Learning in a Parkinson's Disease Digital Biomarker Dataset Using Neural Network Construction (NNC) Methodology Discriminates Patient Motor Status]]></title>
        <pubDate>2019-05-14T00:00:00Z</pubDate>
        <category>Brief Research Report</category>
        <author>Ioannis G. Tsoulos</author><author>Georgia Mitsi</author><author>Athanassios Stavrakoudis</author><author>Spyros Papapetropoulos</author>
        <description><![CDATA[Parkinson's disease (PD) patient care is limited by inadequate, sporadic symptom monitoring, infrequent access to care, and sparse encounters with healthcare professionals, leading to poor medical decision making and sub-optimal patient health-related outcomes. Recent advances in digital health approaches have enabled objective and remote monitoring of impaired motor function with the promise of profoundly changing the diagnostic, monitoring, and therapeutic landscape in PD. We recently demonstrated that, by using a variety of upper limb functional tests, iMotor, an artificial intelligence-powered, cloud-based digital platform, differentiated PD subjects from healthy volunteers (HV). The objective of this paper is to provide preliminary evidence that artificial intelligence systems may allow one to further discriminate PD patients from HV and to determine different features of the disease within a cohort of PD subjects. The recently introduced Neural Network Construction (NNC) technique was used here to classify data collected by a mobile application (iMotor, Apptomics Inc., Wellesley, MA) into two categories: PD for patients and HV. The method was tested on a series of data previously collected, and the results were compared against more traditional techniques for neural network training. The NNC algorithm discriminated individual PD patients from HV with 93.11% accuracy and ON vs. OFF state with 76.5% accuracy. Future applications of artificial intelligence-powered digital platforms can enhance clinical care and research by generating rich, reliable, and sensitive datasets that can be used for medical decision-making during and between office visits. Additional artificial intelligence-based studies in larger cohorts of patients are warranted.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00007</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00007</link>
        <title><![CDATA[Preaching Voxels: An Alternative Approach to Mixed Reality]]></title>
        <pubDate>2019-04-24T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Holger Regenbrecht</author><author>Jung-Woo (Noel) Park</author><author>Claudia Ott</author><author>Steven Mills</author><author>Matthew Cook</author><author>Tobias Langlotz</author>
        <description><![CDATA[For mixed reality applications, where reality and virtual reality are spatially merged and aligned in interactive real-time, we propose a pure voxel representation as a rendering and interaction method of choice. We show that voxels—gap-less volumetric pixels in a regular grid in space—allow for an actual user experience of a mixed reality environment, for a seamless blending of virtual and real as well as for a sense of presence and co-presence in such an environment. If everything is based on voxels, even if coarse, visual coherence is achieved inherently. We argue the case for voxels by (1) conceptually defining and illustrating voxel-based mixed reality, (2) describing the computational feasibility, (3) presenting a fully functioning, low resolution prototype, (4) empirically exploring the user experience, and finally (5) discussing current work and future directions for voxel-based mixed reality. This work is not the first that utilizes voxels for mixed reality, but is the first that uses voxels for all internal, external, and user interface representations as an effective way of experiencing and interacting with mixed reality environments.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00006</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00006</link>
        <title><![CDATA[Corrigendum: Interactive Optimization With Parallel Coordinates: Exploring Multidimensional Spaces for Decision Support]]></title>
        <pubDate>2019-04-09T00:00:00Z</pubDate>
        <category>Correction</category>
        <author>Sébastien Cajot</author><author>Nils Schüler</author><author>Markus Peter</author><author>Andreas Koch</author><author>Francois Maréchal</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00005</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00005</link>
        <title><![CDATA[Cyber Security Threats and Challenges in Collaborative Mixed-Reality]]></title>
        <pubDate>2019-04-09T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Jassim Happa</author><author>Mashhuda Glencross</author><author>Anthony Steed</author>
        <description><![CDATA[Collaborative Mixed-Reality (CMR) applications are gaining interest in a wide range of areas including games, social interaction, design, and health-care. To date, the vast majority of published work has focused on display technology advancements, software, collaboration architectures, and applications. However, the potential security concerns that affect collaborative platforms have received limited research attention. In this position paper, we investigate the challenges posed by cyber-security threats to CMR systems. We focus on typical network architectures facilitating CMR and how their vulnerabilities can be exploited by attackers, and discuss the potential social, monetary, psychological, and other harms that may result from such exploits. The main purpose of this paper is to provoke a discussion on CMR security concerns. We highlight insights from a cyber-security threat modelling perspective and also propose potential directions for research and development toward better mitigation strategies. We present a simple, systematic approach to understanding a CMR attack surface through an abstraction-based reasoning framework to identify potential attack vectors. Using this framework, security analysts, engineers, designers, and users alike (stakeholders) can identify potential Indicators of Exposure (IoE) and Indicators of Compromise (IoC). Our framework allows stakeholders to reduce their CMR attack surface as well as understand how Intrusion Detection System (IDS) approaches can be adopted for CMR systems. To demonstrate the validity of our framework, we illustrate several CMR attack surfaces through a set of use-cases. Finally, we also present a discussion on future directions this line of research should take.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00004</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00004</link>
        <title><![CDATA[The Design of UDOO Boards: Contributing to the Appropriation of Digital Technology]]></title>
        <pubDate>2019-03-18T00:00:00Z</pubDate>
        <category>Technology Report</category>
        <author>Antonio Rizzo</author><author>Maurizio Caporali</author><author>Daniele Conti</author><author>Francesco Montefoschi</author><author>Giovanni Burresi</author><author>Bruno Sinopoli</author>
        <description><![CDATA[The domain of Human-Computer Interaction concerns not only the design of technology that is easy to use, useful, and fancy—it also has to do with our role in shaping our environment, our ecological niche, which today involves the whole earth. A key concept in the interaction between humans and computing resources is that of appropriation, originally proposed by Aleksei Nikolaevich Leontiev. In the present paper we will first review the concept of appropriation and will present bricolage as a key activity for fostering appropriation. Then we will present the Makers Movement as a socio-cultural movement relevant to the process of appropriation of digital technology. Finally, we will describe our approach and vision in the design of the UDOO, a single-board computer, and of a specific development environment, UAPPI, for enabling the appropriation of digital technologies through meaningful activities.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00003</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00003</link>
        <title><![CDATA[Editorial: Active Learning: Theoretical Perspectives, Empirical Studies, and Design Profiles]]></title>
        <pubDate>2019-03-07T00:00:00Z</pubDate>
        <category>Editorial</category>
        <author>Robert Cassidy</author><author>Elizabeth S. Charles</author><author>James D. Slotta</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00002</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00002</link>
        <title><![CDATA[Cords and Chords: Exploring the Role of E-Textiles in Computational Audio]]></title>
        <pubDate>2019-03-01T00:00:00Z</pubDate>
        <category>Review</category>
        <author>Rebecca Stewart</author>
        <description><![CDATA[Electronic textiles (e-textiles) have played a significant role in computational audio ranging from wearable interfaces for creative expression to more utilitarian purposes such as acoustic monitoring for military applications. This article looks at e-textiles within computational audio from three perspectives: the historical developments of the field; the core enabling technologies; and the primary application areas. It closes with a discussion of what role e-textiles may play in future computational audio systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2019.00001</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2019.00001</link>
        <title><![CDATA[Touchy: A Visual Approach for Simulating Haptic Effects on Touchscreens]]></title>
        <pubDate>2019-02-13T00:00:00Z</pubDate>
        <category>Original Research</category>
        <author>Antoine Costes</author><author>Ferran Argelaguet</author><author>Fabien Danieau</author><author>Philippe Guillotel</author><author>Anatole Lécuyer</author>
        <description><![CDATA[Haptic enhancement of touchscreens usually involves vibrating motors that produce limited sensations, or custom mechanical actuators that are difficult to disseminate. In this paper, we propose an alternative approach called “Touchy,” in which a symbolic cursor is introduced under the user's finger to evoke various haptic properties through changes in its shape and motion. This novel metaphor makes it possible to address four different perceptual dimensions, namely hardness, friction, fine roughness, and macro roughness. Our metaphor comes with a set of seven visual effects that we compared with real texture samples in a user study with 14 participants. Taken together, our results show that Touchy is able to elicit clear and distinct haptic properties: stiffness, roughness, reliefs, stickiness, and slipperiness.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fict.2018.00032</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fict.2018.00032</link>
        <title><![CDATA[Interactive Optimization With Parallel Coordinates: Exploring Multidimensional Spaces for Decision Support]]></title>
        <pubDate>2019-01-14T00:00:00Z</pubDate>
        <category>Methods</category>
        <author>Sébastien Cajot</author><author>Nils Schüler</author><author>Markus Peter</author><author>Andreas Koch</author><author>François Maréchal</author>
        <description><![CDATA[Interactive optimization methods are particularly suited for letting human decision makers learn about a problem, while a computer learns about their preferences to generate relevant solutions. For interactive optimization methods to be adopted in practice, computational frameworks are required, which can handle and visualize many objectives simultaneously, provide optimal solutions quickly and representatively, all while remaining simple and intuitive to use and understand by practitioners. Addressing these issues, this work introduces SAGESSE (Systematic Analysis, Generation, Exploration, Steering and Synthesis Experience), a decision support methodology, which relies on interactive multiobjective optimization. Its innovative aspects reside in the combination of (i) parallel coordinates as a means to simultaneously explore and steer the underlying alternative generation process, (ii) a Sobol sequence to efficiently sample the points to explore in the objective space, and (iii) on-the-fly application of multiattribute decision analysis, cluster analysis and other data visualization techniques linked to the parallel coordinates. An illustrative example demonstrates the applicability of the methodology to a large, complex urban planning problem.]]></description>
      </item>
      </channel>
    </rss>