
TECHNOLOGY AND CODE article

Front. Virtual Real., 11 March 2022
Sec. Technologies for VR
Volume 3 - 2022 | https://doi.org/10.3389/frvir.2022.781218

emteqPRO—Fully Integrated Biometric Sensing Array for Non-Invasive Biomedical Research in Virtual Reality

  • 1Centre for Digital Entertainment, Faculty of Media and Communication, Bournemouth University, Poole, United Kingdom
  • 2Emteq Labs, Sussex Innovation Centre, Brighton, United Kingdom
  • 3Department of Psychiatry, University of Southampton, Southampton, United Kingdom
  • 4Department of Psychology, College of Science, Northeastern University, Boston, MA, United States
  • 5Department of Psychology, Faculty of Science and Technology, Interdisciplinary Neuroscience Research Centre, Bournemouth University, Poole, United Kingdom
  • 6Department of Information and Communication Systems Engineering, University of the Aegean, Samos, Greece
  • 7Department of Computing and Informatics, Faculty of Science and Technology, Interdisciplinary Neuroscience Research Centre, Bournemouth University, Poole, United Kingdom
  • 8School of Sport and Health Sciences, University of Brighton, Brighton, United Kingdom

Virtual Reality (VR) enables the simulation of ecologically valid scenarios, which are ideal for studying behaviour under controllable conditions. Physiological measures captured in these studies provide deeper insight into how an individual responds to a given scenario. However, combining multiple biosensing devices presents several challenges, such as efficient time synchronisation between devices, replication across participants and settings, and the management of cumbersome setups. Additionally, important salient facial information is typically covered by the VR headset, requiring a different approach to facial muscle measurement. These challenges can restrict the use of such devices to laboratory settings. This paper describes a solution to this problem. More specifically, we introduce the emteqPRO system, which provides an all-in-one solution for the collection of physiological data through a multi-sensor array built into the VR headset. EmteqPRO is a ready-to-use, flexible sensor platform enabling convenient, heterogeneous, and multimodal emotion research in VR. It enables the capture of facial muscle activations, heart rate features, skin impedance, and movement data: important factors for the study of emotion and behaviour. The platform provides researchers with the ability to monitor data from users in real-time, in co-located and remote set-ups, and to detect activations in physiology that are linked to changes in arousal and valence. The SDK (Software Development Kit), developed specifically for the Unity game engine, enables easy integration of the emteqPRO features into VR environments.

Code available at: https://github.com/emteqlabs/emteqvr-unity/releases

1 Introduction

Emotions permeate our everyday interactions, our decisions, and how we react to others (“What We Talk About When We Talk About Emotions”, 2016). Emotional/affective internal states and their physiological and behavioural responses can be triggered by objects, events, and situations, and they guide our everyday cognition and behaviour. Hence, measuring these affective responses is key to understanding human behaviour and mental health (Gu et al., 2019). In this paper, we will 1) review the literature regarding emotion recognition and multimodal systems, 2) explain how the newly developed emteqPRO system can accurately and objectively measure data relevant to all dimensions of affect detection in a flexible and convenient way, and 3) demonstrate that emteqPRO is a unique technology that can play a pivotal role in emotion research.

According to the dimensional model of affect (Russell, 2003), affect comprises two major dimensions: valence and arousal. Valence is the dimension of affective states that describes the degree to which an affective response ranges from positive to negative (pleasant to unpleasant) (Shuman et al., 2013). The face is understood to be the richest source not only of valence information but of affect as a whole (Cacioppo et al., 1986; Cacioppo et al., 1988; Magnée et al., 2007; Zhang et al., 2011; Tan et al., 2016). This is reflected in changes in facial muscle activation in response to an affective stimulus. For this reason, many research studies assess valence levels by recording facial muscle activations and configurations, mainly through the use of electromyographic (EMG) and computer-vision methods. However, the most informative parts of the face are covered by commercial virtual reality (VR) headsets (Mavridou et al., 2018a) and, therefore, conventional methods for facial expression recognition cannot be easily applied in VR.

Arousal (Thayer, 1978) describes the level of physiological intensity that pertains to affective states. It can be measured from physiological changes such as skin conductance (Alexandratos et al., 2014), heart rate and heart-rate dynamics such as heart-rate variability (Li et al., 2020), eye-tracking measures such as pupil dilation (Wang et al., 2018), and head movement (Cig et al., 2010). Other measures, such as physical movement captured with motion tracking (Dirican and Göktürk, 2012) or vocal responses captured in audio recordings (Weninger et al., 2013), can be used to measure approach and avoidance behaviours (Corr, 2013). These behaviours are typically inferred from action tendencies towards or away from a stimulus source.

Academic and industrial researchers have identified the benefits of multimodal sensing platforms, particularly wearable systems where physiological data can be captured directly in real-time. For example, athletes are continually seeking new technologies to gain a competitive edge through the analysis of biomarkers to help guide their workouts and recovery (Seshadri et al., 2019). Health monitoring systems in which many health aspects can be recorded simultaneously are also very popular (Guerreiro et al., 2013; Al-Rawhani et al., 2020), with some targeting specific areas such as knee-joint health (Teague et al., 2020) or rehabilitation programmes (Araujo et al., 2014). At times, these systems use machine learning as an automatic detection tool to handle simultaneous multimodal data streams. Models developed using this approach can translate physiological input into more meaningful predictions, such as the possibility of injury or the likelihood of developing a certain medical condition (Raghupathi and Raghupathi, 2014).

In recent years there has been an increased interest in sensor-enabled VR technologies (see Figure 1).

FIGURE 1. Number of academic publications over the last 20 years that include in the title, abstract, or keywords: “Virtual Reality” or “VR” and “EMG” (and/or “electromyography”, “PPG”, “photoplethysmography”). Search performed using Scopus.

VR as an immersive technology medium enables the developer to dictate the immersive experience of the user, while providing fine-grained control of what is seen, what is heard, and the way users can interact with the content. VR can provide high ecological validity (Menshikova et al., 2020), which can, in turn, induce more naturalistic responses in users compared to traditional laboratory-like environments (Martens et al., 2019). Additionally, it enables researchers to answer complex research questions that would otherwise be unsafe or infeasible to test under controlled conditions (Fertleman et al., 2018; Freeman et al., 2018; Siang et al., 2018). A large body of research using VR technologies utilises portable head-mounted displays (HMDs) or VR headsets, which allow for head and body movement within the virtual space, content interaction via input controllers, and gaze/eye-tracking.

VR provides fine-grained control of the wearer’s external experience within the virtual content, but a window into the wearer’s internal experience remains closed. This is due to technical limitations that have prevented VR from reliably recording the cardinal features of emotion described above. These limitations stem primarily from the design of the VR headsets themselves, which cover a large part of the human face and, critically, the facial muscles responsible for naturalistic expression (Mavridou et al., 2019a; Mavridou et al., 2018b). Another difficulty is setting up multisensory VR studies. Multimodal affective-state recordings are suggested to be more reliable than unimodal approaches, yielding more accurate results (D’Mello and Kory, 2012). Synchronising multiple peripheral devices in VR settings is challenging, and thus studies combining VR with biometrics often rely on data obtained from a limited range of sensors.

With the increased popularity of, and interest in, affect-sensing capabilities in VR, there is a need for a specialised system for VR settings that can measure both dimensions of affect equally well. A platform approach, with multimodal affect-sensing capability integrated into a headset and supporting software, is suggested to address the need for heterogeneous sensing capabilities in emotion research. Simultaneously, such an approach can minimise the complexity of the hardware and the expertise required to conduct research studies.

Here, we describe the emteqPRO system: the first fully integrated sensing platform dedicated to emotion research using HMD-enabled VR technologies that allows for objective measurement of each of the components discussed (valence, arousal, approach-avoidance). EmteqPRO alleviates the lack of access to facial movement in VR through integrated sensors and thus enables affect detection through facial EMG (van Boxtel, 2010a), something not possible with conventional headsets. EmteqPRO provides a wide range of functionalities as part of the overall platform. Researchers can either develop their own signal-processing methods using raw sensor data or use processed outputs and extracted signal features, such as heart rate in beats per minute and EMG signal amplitude per channel. The flexible and ready-to-use nature of the platform minimises the effort required to set up, collect, and analyse data, significantly cutting the costs and time required. EmteqPRO is now a commercial product1, currently available through a beta scheme to selected businesses and organisations.

2 Related Work

Researchers have recently started using existing sensor technologies to measure behaviour and responses in VR. These studies mostly involve attaching a large number of external, separate sensor modules to the user in order to collect the data required to train various classifiers (Girardi et al., 2017). Various sensors are often used to measure different features and infer psychological or behavioural constructs (see Marín-Morales et al., 2020 for a review). The cumbersome nature of such setups introduces extra effort in signal synchronisation between devices, hinders reproducibility between users due to the multistage, error-prone process of putting on all the equipment, can be quite expensive once the cost of all devices is added up, and has little future outside of lab settings due to low practicality. Given the increased popularity of VR platforms, the problems with existing approaches, and a clear direction of focusing on capturing physiological data to measure biological reactions, several organisations have started showing interest in developing their own solutions. As a result, while the concept of measuring physiological data in VR to derive user responses is not novel, a platform that combines multiple sensors, synchronises data, and provides tools for implementation directly within the experience certainly is. Figure 2 is a compiled list of some of the existing solutions that have emerged over the last few years in response to the need for multisensory platforms for the collection of biofeedback in VR environments.

FIGURE 2. List of relevant sensing systems used for the collection of biofeedback in virtual reality. Sensor, software, and hardware differences between platforms are highlighted for a straightforward comparison of each device.

Each device combines a different set of sensors in addition to a varied level of support and software offerings. For example, HP’s Omnicept headset provides eye-tracking, pupillometry, heart rate, and a face camera built into the inside of the headset (“HP Reverb G2 Omnicept Edition | HP® Official Site”, 2021). HP also advertises a feature capable of determining exerted cognitive load, derived from the collected measures (Siegel et al., 2021). However, no raw data is made available to researchers, restricting users to the features detected and made accessible through HP’s proprietary algorithms. Another example is the Psious headset. The team behind this headset has created VR therapies designed to help people overcome phobias and improve mental health. Healthcare use cases are increasingly amongst the most interesting applications of biofeedback in VR. Collected data can be used to objectively measure various features to assess clinical severity in patients and the public alike. VR therapies have been found to be successful in treating patients, and collecting biofeedback data can help in determining the progress and effectiveness of treatments for phobias (Suyo and Núñez-Torres, 2016) or other mental health conditions such as PTSD (Zhang, 2020). Psious’ biggest drawback is its unimodal approach: the only measure collected is electrodermal activity (EDA). While extremely useful for arousal detection, EDA by itself cannot paint a full picture of what the user is experiencing and feeling (D’Mello and Kory, 2015). Looxid Labs went down a different route and designed a headset that focuses on capturing electroencephalogram (EEG) data. Their design includes several forehead sensors that capture the portion of the brain’s electrical activity that propagates to the surface of the skin. The potential of EEG data is undoubtedly huge, but collecting reliable EEG data through dry electrodes can prove very tricky, as the signal originates deep inside the human skull. Finally, OpenBCI’s project Galea, not listed in this figure, promises to combine most of the features utilised by the aforementioned devices, but it is very early in its development and not much information has been made public at this stage.

Biometric data collected in VR has immense potential outside of healthcare applications too. Many researchers predict that v-commerce, a new term describing stores existing exclusively in digital space, similar to websites but designed specifically for virtual reality, is the future of commerce (Martínez-Navarro et al., 2019). The ability to record features such as movement or eye tracking can help identify consumer behaviour and patterns to improve the experience. Since it is much easier and cheaper to create and modify digital spaces, using virtual reality as a tool to test concepts in architecture or urban planning, combined with the collection of features such as stress or cognitive load, can help design more efficient and pleasant environments that are easier to navigate (Zeile and Resch, 2018). VR technologies are also actively used in design, evaluation, and training processes across multiple industries and disciplines (Berg and Vance, 2016), in travel planning, and in more social use cases such as virtual meetings and family gatherings (Du et al., 2019).

Emteq Labs has created the emteqPRO platform to address the need for a more user-friendly, all-in-one device that combines a collection of various features and helps automate data processing. This will enable researchers and industry to quantify and objectively measure behaviour in healthcare interventions, entertainment, retail, and research applications by providing an additional layer of informative data and thus an understanding of how and why people respond, react, and behave the way they do.

3 EmteqPRO Hardware

The emteqPRO system consists of a wired VR sensor-mask insert (see Figure 3) for use with a variety of VR headsets; for example, the HTC Vive Pro shown in Figure 4A for lab use and the Pico G2/G2 4K shown in Figure 4B for remote data collection. Both models are improved versions of the Faceteq and EmteqVR prototypes our teams developed in the past (Mavridou et al., 2017b; Mavridou et al., 2019b). The emteqPRO system provides kinematic, physiological, and ambient sensing capabilities to measure the two affect dimensions, arousal and valence. The outer surface of the insert consists of a soft silicone layer used for comfort and to maintain good contact with the skin.

FIGURE 3. EmteqPRO biosensing insert (180 × 120 × 77 mm).

FIGURE 4. EmteqPRO inserts mounted onto two different headsets. (A) HTC Vive Pro headset; the same insert is also compatible with the HTC Vive Pro Eye model. (B) Pico G2 4K all-in-one headset; the same insert is also compatible with the regular G2 and Neo 2 Eye models.

If desired, it is feasible to use emteqPRO without the VR headset, as an open facemask. The insert can simply be taken off the head-mounted VR display and worn on the face, leaving the wearer able to see their surroundings for real-life studies. This setup can be appealing for studies exploring the differences between VR and the real world.

The electrical components inside the device are contained within a lightweight, protective polyurethane housing with an SLS nylon interface to attach the insert to the headset. The emteqPRO mask weighs 180 g and replaces the foam insert of the headset. In the case of the Vive headset, where the foam was found to weigh 24 g, the emteqPRO insert therefore adds approximately 156 g. The average power consumption is 200 mW for the Pico version and 800 mW for the Vive version, with the device running on 5 V DC provided via an included USB 2.0 cable connected to the headset.

EmteqPRO has been certified to comply with EU directives (CE mark): Radio Equipment Directive 2014/53/EU, RoHS Directive 2011/65/EU, and the following harmonised European standards: EN 300 328 v2.1.1 (2016-11), EN 301 489-17 v3.2.0 (2017-03), and EN 60601-1:2006 + A12:2014. These certifications affirm the conformity of the emteqPRO system with European health, safety, and environmental standards, allowing research to be conducted safely.

EmteqPRO provides the necessary hardware for researchers to study affect in VR through its integrated sensors, which capture and synchronise physiological measures important for affect detection. The unique integration of sensors into the HMD provides access to facial movement data that is otherwise inaccessible in VR and is vital to the detection of valence. Facial movement, in combination with heart rate and movement data, gives a much better picture of users’ affective states than any of the sensors individually, providing a more complete insight into what is felt and experienced. The real-time signal-quality measures described below can be used to ensure that data with a high signal-to-noise ratio are captured. Multiple validation studies (Fatoorechi et al., 2017; Mavridou et al., 2017c; Mavridou et al., 2018a; Mavridou et al., 2019b; Governo et al., 2020; Gnacek et al., 2020; Lou et al., 2020) and white papers (emteqlabs, 2020a, 2020b, 2021) have already been published to showcase emteqPRO’s viability and potential use across a wide spectrum of applications.

3.1 Dry EMG Sensors

Seven dry, stainless-steel electromyographic (EMG) electrodes for facial muscle activity detection are integrated into the emteqPRO mask. Traditionally, EMG sensors require adhesives, skin preparation, and electrode conductive gel. Our dry EMG technology avoids these steps, enabling rapid setup with 24-bit signal resolution, 2,000 samples/s with no inter-sensor latency, and a signal bandwidth of DC–500 Hz. Each dry electrode unit has been designed to introduce minimal distortion, and the following characteristics were important in achieving a very high signal-to-noise ratio.

3.1.1 Differential Amplification

High measurement sensitivity to electrical muscle activity is achieved by using a differential electrode pair to reduce common-mode noise. This also localises the measurement and decreases signal leakage from other muscles in close proximity.
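
In the standard textbook formulation (a general description, not taken from the emteqPRO datasheet), a differential amplifier with differential gain $A_d$ and common-mode gain $A_{cm}$ produces

$V_{out} = A_d\,(V_{+} - V_{-}) + A_{cm}\,\frac{V_{+} + V_{-}}{2}, \qquad \mathrm{CMRR} = 20\log_{10}\left(A_d/A_{cm}\right)\ \mathrm{dB}$

The EMG of interest appears as the difference $V_{+} - V_{-}$ across the electrode pair, whereas interference such as power-line noise appears almost identically on both electrodes (the common mode), so a high common-mode rejection ratio (CMRR) suppresses it.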

3.1.2 Input Impedance

The source impedance at the junction of the skin and the detection surface may range from several thousand ohms to several megohms for dry skin when using dry electrode technology. To prevent attenuation and distortion of the detected signal due to input loading, the input impedance of the differential amplifier must be orders of magnitude larger than the expected skin impedance, without causing ancillary complications to the workings of the differential amplifier. In addition to the magnitude of the input impedance, the balance between the impedances of the two detection sites is also of great importance. For this reason, in selecting the positioning of our electrodes on the face, we have chosen locations and orientations over specific muscle groups of interest (Figure 5).

FIGURE 5. Mapping between emteqPRO sensors and facial muscles.

3.1.3 Active Electrode Design and Stability

The requirement for a high input impedance introduces a problem known as capacitive coupling at the input of the differential amplifier. A small capacitance between the wires leading to the input of the differential amplifier and the power line will introduce a power-line noise signal into the amplifier. This phenomenon is similar to the way television signal strength increases when one places one’s hand near the antenna input without touching it. The solution is to shield the input electrodes and their connections to the ADC (analogue-to-digital converter) and to actively cancel the interference using a driven-right-leg circuit design (Winter and Winter, 1983). Therefore, external noise at the output of the electrode will not generate significant noise signals in the amplifier.

3.1.4 Electrode Positioning

Dry EMG electrodes have been positioned to overlap the zygomatic, corrugator, frontalis, and orbicularis muscles (Figure 5). These muscle groups are central to measuring markers of objective valence responses to stimuli (Cacioppo et al., 1986; Cacioppo et al., 1988; Magnée et al., 2007; Zhang et al., 2011; Tan et al., 2016).

3.1.5 Lift Detection (From the Skin)

Applying an excitation signal to the skin ensures that the contact between the skin and the EMG sensor is continuously monitored and logged. A small alternating current (AC, 24 nA) is generated internally by an ADC chip onboard the emteqPRO device. This current is applied to the skin, resulting in an AC voltage travelling through the skin and being detected by the EMG electrodes. If this signal is not detected by the EMG electrodes, the sensors are not in contact with the skin. The amplitude of this signal is related to the contact quality between the skin and the electrodes (a stronger signal means better skin contact). This approach to measuring sensor-skin fit has the added benefit of being able to measure changes in impedance. Impedance measures the opposition of the circuit to current flow, in ohms; in the case of emteqPRO, this is a measure of how easily the current travels through the skin. The signal can additionally be used to record electrodermal skin conductance in response to a stimulus associated with physiological arousal (Critchley, 2002; van Dooren et al., 2012). The frequency of this AC signal is always set to a quarter of the sampling rate of the device. The maximum sampling rate is 2,000 Hz, leaving the AC lift signal running at 500 Hz. This places the AC lift signal just outside the desired frequency range for facial EMG, which is defined to be within the 20–500 Hz range (van Boxtel, 2010a), minimising any interference with EMG data recording and analysis.
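
Because the lift tone sits at exactly one quarter of the sampling rate, its amplitude can be estimated cheaply without a full spectral analysis. The following C# sketch is illustrative only (it is not the emteqPRO firmware): it evaluates the single DFT bin at fs/4, where the cosine and sine weights reduce to the repeating sequences [1, 0, -1, 0] and [0, 1, 0, -1].

using System;

static class LiftDetection
{
    // Estimate the peak amplitude of the fs/4 lift-detection tone within a
    // window of raw samples. At fs/4, the DFT weights need no trigonometry.
    public static double LiftToneAmplitude(double[] window)
    {
        double i = 0, q = 0;
        for (int n = 0; n < window.Length; n++)
        {
            switch (n % 4)
            {
                case 0: i += window[n]; break;  // cosine weight +1
                case 1: q += window[n]; break;  // sine weight +1
                case 2: i -= window[n]; break;  // cosine weight -1
                case 3: q -= window[n]; break;  // sine weight -1
            }
        }
        // Convert the single-bin DFT magnitude into a peak-amplitude estimate.
        return 2.0 * Math.Sqrt(i * i + q * q) / window.Length;
    }
}

A vanishing amplitude indicates that the electrode has lifted from the skin, while a change in amplitude under constant contact reflects a change in skin impedance.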

3.1.6 Variable Insert Sizes

With skin-sensor contact being so critical to recording good-quality data, a one-size-fits-all approach to emteqPRO sensor inserts would be detrimental. Human faces vary greatly in size and shape, and effort needs to be made to ensure the correct fit. Two sets of easily interchangeable inserts, for wide and narrow faces, were manufactured to alleviate the problem of variability in face size and shape. This approach provides a reliable fit for most faces, but should additional inserts be required (for example, for children), they can be easily ordered, manufactured, and integrated into existing systems.

3.2 PPG Sensor

The PPG sensor (by Osram Opto Semiconductors) used in this device records at a fixed rate of 2,000 Hz. Its green LED measures the expansion and contraction of capillaries based on blood volume changes. PPG signal capture using green light (530 nm wavelength) has been found to be a more suitable alternative to other commonly used LEDs (red, 645 nm, and blue, 470 nm), particularly in high-movement scenarios (Lee et al., 2013). The voltage signal from a PPG sensor is proportional to the quantity of blood flowing through the capillaries. Traditionally, PPG sensors have been placed on either the finger or the wrist, given that high capillary density contributes to greater sensitivity and a greater signal-to-noise ratio. The emteqPRO has a single PPG sensor centred on the forehead. Capillaroscopy of the forehead region has shown capillary density similar to the finger and forearm (Toll, 2020). This density of capillary coverage makes the forehead well-suited as a placement location (Branson and Mannheimer, 2004), and the sensor can be used to detect changes in blood flow (Otsuka et al., 2017) and accurately measure heart rate features such as beats per minute (Gnacek et al., 2020) to reflect changes in arousal. Moreover, a proximity sensor is used to measure the amount of ambient incident light that reaches the sensor. Fluctuations in this signal can be used to help identify segments of unreliable PPG data where the sensor is not making sufficient contact with the skin (as shown in Figures 6A,B).
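
As a minimal illustration of this idea (our own sketch, not emteqPRO software), segments in which the proximity signal fluctuates can be masked out before heart-rate features are computed. The window length and threshold below are illustrative values, not vendor-specified ones.

using System;

static class PpgQuality
{
    // Flag samples as unreliable wherever the rolling standard deviation of
    // the proximity signal exceeds a threshold, i.e., where contact is unstable.
    public static bool[] FlagUnreliable(double[] proximity, int window = 100, double threshold = 0.05)
    {
        var bad = new bool[proximity.Length];
        for (int start = 0; start + window <= proximity.Length; start += window)
        {
            double mean = 0.0, variance = 0.0;
            for (int j = start; j < start + window; j++) mean += proximity[j];
            mean /= window;
            for (int j = start; j < start + window; j++)
                variance += (proximity[j] - mean) * (proximity[j] - mean);
            variance /= window;

            if (Math.Sqrt(variance) > threshold)            // contact is fluctuating
                for (int j = start; j < start + window; j++) bad[j] = true;
        }
        return bad;
    }
}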

FIGURE 6. Demonstration of how the PPG proximity sensor can be used to identify noise in the PPG signal. (A) The emteqPRO was placed on the face and subsequently taken off; changes in the light reaching the sensor and the loss of the raw PPG signal can be seen. (B) The emteqPRO was placed on the face and subsequently moved to create movement artefacts; a reduction in PPG signal quality can be observed during the fluctuations recorded by the proximity sensor.

3.3 Inertial Measurement Unit Sensors

The emteqPRO platform has a six-axis MEMS motion-tracking device that combines a 3-axis gyroscope and a 3-axis accelerometer. This provides information about movement and posture. The fixed orientation of the insert ensures the correct orientation of the sensors for X/Y/Z measurements. In addition, a magnetometer (by STMicroelectronics), a triaxial magneto-resistive sensor designed for low-field magnetic sensing, is incorporated into the insert. This provides applications with the direction and magnitude of the Earth’s magnetic field. The accelerometer, gyroscope, and magnetometer do not require calibration before use, and all three sensors record data at a fixed rate of 50 Hz.

4 EmteqPRO Technical Architecture

The emteqPRO platform was built with flexibility in mind: it allows access to raw sensor data and/or automated real-time signal processing specifically designed to reduce post-processing effort. This is achieved through a combination of firmware and software solutions provided with the device.

The emteqPRO architecture overview is shown in Figure 7, depicting the interaction between the various elements of the platform. Firmware installed on the device provides data collection and signal-processing capabilities. The emteqVR Software Development Kit (SDK) for the Unity game engine is an optional component for researchers wishing to design their own custom VR experiments. The SDK allows researchers to trigger data collection, stream raw and processed data into the VR application in real-time (e.g., for adaptive control and signal-quality monitoring), and add event markers (custom timestamped events used to synchronise physiological data with user actions).

FIGURE 7. emteqPRO software architecture.

Depending on the desired target platform, emteqPRO functionality can be integrated into Windows applications (via an executable file) or headsets running the Android operating system (via an “.apk” file). Alternatively, data collection and real-time monitoring can be triggered from within the “SuperVision” signal-monitoring application described below. If using the Vive model with a PC connection, data output files can be accessed through local file storage. In the mobile version of the device, data can also be uploaded to secure cloud storage, given a stable WiFi connection.

4.1 Firmware

Firmware is installed directly onto the device and is responsible for the correct functioning of the mask and all its components. The firmware runs on an onboard high-performance microcontroller. It provides real-time measurements for the following sensor data: 1) all seven EMG sensor pairs, 2) lift detection/skin contact for all seven EMG sensors, 3) PPG signal/proximity, and 4) inertial measurements of linear and rotational acceleration. Data synchronisation is provided, as well as the inclusion of timestamps (UNIX timestamps in milliseconds for all data streams and events) for synchronisation with any third-party devices.

The following output data are provided: 1) extracted heart-rate features, namely heart rate in beats per minute and heart-rate variability metrics such as RMSSD, SDNN, and RR intervals, and 2) raw data. EmteqPRO’s firmware handles all communication between the insert and a host device (PC or Android) via a USB cable or LAN network.
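
Because every stream and event carries a UNIX millisecond timestamp, third-party recordings can be aligned offline with simple nearest-neighbour matching. The sketch below is our own illustration of that workflow, not part of the firmware API.

using System;

static class Synchronisation
{
    // Find the index of the emteqPRO sample closest in time to a third-party
    // sample, given that both use UNIX millisecond timestamps. Assumes the
    // timestamp array is sorted in ascending order.
    public static int NearestIndex(long[] timestampsMs, long targetMs)
    {
        int idx = Array.BinarySearch(timestampsMs, targetMs);
        if (idx >= 0) return idx;                      // exact timestamp match
        int next = ~idx;                               // first element greater than target
        if (next == 0) return 0;
        if (next == timestampsMs.Length) return next - 1;
        long dPrev = targetMs - timestampsMs[next - 1];
        long dNext = timestampsMs[next] - targetMs;
        return dPrev <= dNext ? next - 1 : next;       // pick the closer neighbour
    }
}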

4.2 SuperVision Application

The SuperVision application (https://github.com/emteqlabs/supervision/releases) is designed to help researchers with data collection and initial quality assessment. SuperVision is currently a 64-bit, user-facing, Windows 10-compatible application created to enable researchers to control, record, and monitor the progress of data collection; the next version of the application will be web-based. SuperVision provides a customisable layout in which researchers can view any combination of the available data streams in real-time, as shown in Figure 8.

FIGURE 8. SuperVision main interface. The customisable layout shows three selected data streams during two facial expressions: Experience Markers (custom, user-defined events from the application), overlaid EMG for all seven channels (any combination of channels can be shown/hidden by clicking the appropriate checkboxes), and PPG (proximity and raw signal).

The mask overlay illustrated in Figure 9 can be used to determine whether all sensors are working correctly, with impedance values displayed through a colour-coded system. This enables easy monitoring of sensor fit throughout data recording, with any change in sensor contact automatically timestamped and logged in the output data.

FIGURE 9. SuperVision mask diagram displaying EMG signal quality: green (good signal), orange (weak signal), grey (no contact).

4.3 Unity SDK

EmteqPRO comes with an SDK (https://github.com/emteqlabs/emteqvr-unity/releases) for the popular Unity game engine (“Unity Real-Time Development Platform | 3D, 2D VR & AR Engine”, 2020). The SDK enables the biometric measurements included in the emteqPRO platform to be incorporated easily into custom VR applications. The SDK also includes development tools created to ease the integration of emteqPRO capabilities into custom VR environments, such as demonstration environments showcasing practical uses of the device.

A pre-made calibration scene (Figure 10) can easily be incorporated into custom applications, ideally before recording any data, to ensure the correct fit of the mask and adequate signal quality. Fit can be adjusted based on real-time data such as lift detection and quality measures, which are used to display a visual representation of sensor-skin contact per individual sensor; these are derived directly from the skin impedance calculated from the EMG sensors. The calibration includes two consecutive steps. The first is the baseline calibration, consisting of a 2-min-long recording during which the user is asked to relax and breathe normally, avoiding any facial expressions and head movements. The second is the EMG calibration, in which users are asked to perform several voluntary facial expressions. The data recorded during the three voluntary facial expressions (which target the facial muscles underlying the EMG sensor positions) are used to inform the subject-specific normalisation method, following the process of using maximum voluntary muscle contractions for the normalisation of EMG signals suggested by Fridlund et al. (1984) and van Boxtel (2010b).
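
A minimal sketch of how such subject-specific normalisation can work, using the two calibration steps described above (relaxed baseline and maximum voluntary contraction, MVC). The function below illustrates the general MVC-normalisation approach rather than the exact emteqPRO implementation.

using System;

static class EmgNormalisation
{
    // Express an EMG amplitude as a fraction of the subject's usable range,
    // where 'baseline' comes from the relaxed recording and 'mvc' from the
    // maximum voluntary contraction for that channel.
    public static double Normalise(double amplitude, double baseline, double mvc)
    {
        double norm = (amplitude - baseline) / (mvc - baseline);
        // Clamp so noise below baseline or above MVC stays within 0..1.
        return Math.Max(0.0, Math.Min(1.0, norm));
    }
}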

FIGURE 10. emteqPRO Unity SDK demo calibration scene, where users are asked to repeat neutral and maximum smile expressions.

All other data streams included with the emteqPRO, such as raw and filtered data for all sensors and heart rate features, can also be accessed via the SDK in real-time. Although data acquisition for some sensors has a high sampling rate (up to 2,000 Hz), it is often impractical and unnecessary to retrieve data at this rate. Hence, the system allows developers to request data at any frequency that matches their needs. More specifically, the SDK provides access to the most recent (last) value of each stream whenever that information is requested by an appropriate function call.
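
This last-value access model maps naturally onto Unity’s frame loop: an application polls at whatever rate it needs, regardless of the underlying 2,000 Hz acquisition. In the sketch below, the commented-out calls use placeholder names (EmteqVrPlugin, GetLatestHeartRate) that are illustrative assumptions, not the SDK’s actual identifiers; the real calls are documented in the SDK repository.

using UnityEngine;

public class AffectPoller : MonoBehaviour
{
    const float PollIntervalSeconds = 0.1f;  // request data at 10 Hz, not 2,000 Hz
    float nextPoll;

    void Update()
    {
        if (Time.time < nextPoll) return;
        nextPoll = Time.time + PollIntervalSeconds;

        // Placeholder calls: fetch the most recent values on demand, e.g.
        // double bpm = EmteqVrPlugin.GetLatestHeartRate();
        // double[] emg = EmteqVrPlugin.GetLatestEmgAmplitudes();
        // ...then adapt the scene based on the returned values.
    }
}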

The other important and unique aspect of the emteqPRO SDK is the custom event-marking system. It lets developers mark any event within the VR experience with a custom data payload or message. This message is saved alongside a system UNIX timestamp (with millisecond accuracy), which can be used to synchronise the events with the physiological data. The event system consists of three parts: data points, data sections, and metadata. A data point is the label of an instantaneous event, while a data section is a labelled period of time occurring during a session. Both data point events and data section events can be saved alongside metadata describing the object’s or event’s properties, such as colour.
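
A hypothetical usage sketch of the three-part event system follows; the method names are placeholders of our own, not the SDK’s real API, but the structure mirrors the description above: a data point labels an instant, a data section labels a period, and both can carry metadata.

// Label an instantaneous event, attaching metadata about the stimulus:
// EmteqVrPlugin.SendDataPoint("stimulus_shown", "{\"object\":\"cube\",\"colour\":\"red\"}");

// Label a period of time spanning a whole task:
// EmteqVrPlugin.StartDataSection("maze_task");
// ... participant completes the task ...
// EmteqVrPlugin.EndDataSection("maze_task");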

5 EmteqPro Data Streams

Table 1 below lists all data streams recorded with the emteqPRO system. These data are available for every session and can either be recorded locally or downloaded from cloud storage for remote sessions. Generated files can easily be converted into CSV (comma-separated values) format for import and analysis by the software of choice, e.g., Python, MATLAB, etc.

TABLE 1. List of all available data streams recorded with the emteqPRO.

5.1 Conversion to Normalised Measurements

All the data streams available in emteqPRO are stored as 16-bit unsigned integers, which can represent values from 0 to 65,535. As many of the data streams exceed that range, a scaled version of the real value is stored, and an appropriate scalar can be applied to produce normalised measurements in the specified unit. This is done by dividing the values by the scalar property values stored within the CSV file for each data stream (Table 2 lists the scalar values and the resulting units). Users can also automatically convert the raw data streams to normalised unit measurements using the provided dab2csv normalised-converter software.
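
A minimal sketch of the scaling rule, assuming (as described above) that the per-stream scalar has been read from the CSV properties or Table 2; the example scalar below is illustrative, not copied from the device documentation.

static class UnitConversion
{
    // Recover a physical-unit measurement from a raw 16-bit sample by
    // dividing by the stream's scalar property.
    public static double ToPhysicalUnits(ushort raw, double scalar)
    {
        return raw / scalar;
    }
}

// Example: if an accelerometer stream used a scalar of 1,000 counts per g,
// a raw value of 2,000 would convert to 2.0 g:
// double g = UnitConversion.ToPhysicalUnits(2000, 1000.0);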

TABLE 2. Units, names and scalar values for normalised properties.

6 Sensor Data

To demonstrate the capabilities of the system and the features discussed, sample data were collected to showcase various examples of signals and interpreted results.

6.1 PPG Signal

To demonstrate the PPG signal and PPG-derived features, one user was asked to put on the emteqPRO (Pico version) and perform a simple exercise to raise the heart rate. This consisted of a 60-s standing rest period, followed by a 30-s wall sit, followed by another 60-s standing rest period. Figure 11 shows an extract of the raw PPG data from that recording.

FIGURE 11. Sample raw PPG signal with visible peaks and morphology, used for further processing and extraction of features such as heart rate, heart rate variability, and breathing rate.

6.2 Heart Rate/Average

Firmware onboard the emteqPRO can calculate real-time heart rate (HR) based on the PPG signal from the device. The average heart rate extracted from the PPG signal discussed in the section above can be seen in Figure 12. An expected rise in HR following strenuous physical activity can be observed, with the HR slowly returning to baseline during rest.

FIGURE 12. Average heart rate extracted from the PPG signal during rest, a wall sit, and rest. A clear increase in heart rate can be observed during and shortly after exercise. The heart rate continues to climb after the exercise is finished due to the 30-s time window over which the average is calculated. Signals shown in this figure can be downloaded from Supplementary Datasheet S5.
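
The lag is a direct consequence of windowed averaging: with a 30-s window, beats from up to 30 s in the past still contribute to the mean, so the average keeps climbing briefly after exertion stops. A minimal sketch of such a rolling mean follows (the window length matches the caption; the implementation is our own illustration, not the firmware's).

using System;

static class HeartRate
{
    // Rolling mean of instantaneous heart-rate values over a fixed window.
    public static double[] RollingMean(double[] bpm, int windowSamples)
    {
        var averaged = new double[bpm.Length];
        double sum = 0.0;
        for (int i = 0; i < bpm.Length; i++)
        {
            sum += bpm[i];
            if (i >= windowSamples) sum -= bpm[i - windowSamples]; // drop sample leaving window
            averaged[i] = sum / Math.Min(i + 1, windowSamples);
        }
        return averaged;
    }
}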

6.3 EMG/Expressions

To showcase the collection of EMG signals and the system’s ability to distinguish between expressions, a participant was asked to perform three voluntary expressions: a maximum smile, a maximum frown (eyebrow squeeze), and surprise (eyebrow raise). Figure 13 shows the plotted EMG signal for all three expressions for each of the seven facial muscles. Activation of specific muscles can be observed for the different expressions, allowing for easy distinction between facial expressions.

FIGURE 13. EMG signal depicting the activation of individual facial muscles when performing smile, frown, and surprise expressions. Signals shown in this figure can be downloaded from Supplementary Datasheets S1–S3.

6.4 IMU/Movement Tracking

An Inertial Measurement Unit (IMU), consisting of an accelerometer, a gyroscope, and a magnetometer, is included in the emteqPRO. This sensor is used to track user movement and to help filter motion artefacts from the remaining sensor signals. IMU data were recorded from one participant using the emteqPRO. The participant was asked to perform four quick movements (left, right, up, and down) in succession, looking straight ahead between movements. The raw accelerometer and gyroscope signals are shown in Figures 14A,B, respectively.

FIGURE 14. Data from the IMU sensor for the x, y, and z-axes. (A) Accelerometer, measuring acceleration. (B) Gyroscope, measuring angular velocity. (C) Magnetometer, measuring the strength and direction of the magnetic field. For (A) and (B), the participant was asked to turn their head in each of the four directions (left, right, up, and down). For (C), the participant was asked to slowly look down and up.

Slow, gradual movements such as head tilts are not easily visible in the accelerometer and gyroscope data. This is where data from the magnetometer is necessary, as it measures the strength and direction of the magnetic field, which changes with the orientation of the device. Figure 14C shows magnetometer data where a participant was asked to slowly look down and up. Clear changes in the signal can be observed, which can be used to identify movement patterns.
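
As an illustration of how orientation can be recovered from the magnetometer (our own sketch, not emteqPRO software), a yaw heading can be derived from the horizontal field components. This assumes the device is roughly level; a full solution would tilt-compensate using the accelerometer’s gravity vector.

using System;

static class Orientation
{
    // Derive a compass heading (degrees) from the magnetometer's x/y
    // components; slow orientation changes show up here even when they are
    // barely visible to the accelerometer and gyroscope.
    public static double HeadingDegrees(double mx, double my)
    {
        double heading = Math.Atan2(my, mx) * 180.0 / Math.PI;
        return (heading + 360.0) % 360.0;   // normalise to the 0..360 range
    }
}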

7 Conclusion

VR technologies have been established as a key research tool due to their potential for high ecological validity (Menshikova et al., 2020). To provide physiological sensing capabilities in VR settings, our lab developed its earlier “Faceteq” prototype into the next-generation system showcased in this paper, the emteqPRO. The emteqPRO is an extremely flexible sensor platform that enables convenient, heterogeneous, sensor-enabled VR research and can be integrated within commercial VR headsets. It provides high-resolution kinematic, f-EMG, and PPG-derived data for psychophysiological and behavioural research in and outside VR. The system was developed using dry f-EMG sensors, and additional software applications were built to support researchers and practitioners. Thus, emteqPRO requires minimal training to set up the device, monitor data outcomes, and obtain insights. We anticipate that the emteqPRO system will help bridge the gap between the lab and VR through multisensory recordings of valence, arousal, and behaviour in future studies, across a wide spectrum of disciplines and applications. Special attention was paid to ease of use: reducing unnecessary complexity during sensor set-up, minimising intrusiveness of the wearable while preserving data quality, and easing integration with other physiological sensing modules and simulation engines. As such, our team provides an open-source SDK, developed for the Unity3D game engine, which allows researchers to stream data from the mask in real-time within their applications.

Virtual reality environments can be precisely controlled on every level, and both real and virtual events can elicit similar physiological responses in people (Peñate et al., 2019; Marín-Morales et al., 2019). This allows research findings in VR to be extrapolated to a wider range of real-life settings. Nearly all existing theories of decision-making emphasise the role of emotion and cognition in appraising a person, behaviour, or experience. EmteqPRO provides a sensitive method for testing hypotheses in a low-risk, controllable simulated environment. The system records affective responses without impacting the user’s virtual experience, while obtaining naturalistic responses and reliable data readings. Affect values can be calculated from the physiological data using machine learning algorithms hosted in the cloud services. This enables researchers with a limited background in affect detection or signal-processing methods to achieve immediate results and feedback, speeding up the process of analysis and aiding the iterative design of VR experiences to obtain the desired results.

The provided Unity SDK enables researchers and designers to integrate emteqPRO’s affective capabilities into custom VR environments. Other game engines will be supported in the future, including Unreal Engine (“The Most Powerful Real-Time 3D Creation Tool - Unreal Engine”, 2021). Currently, in collaboration with different research groups and universities, our team is developing expression and affect inference machine learning (ML) models, which will be provided as part of the system’s real-time functionalities. These models will be used to analyse and classify the user’s affective state, primarily using the dimensional model of affect. The analysis of the affective state will be based on data fusion and ML algorithms that recognise the user’s arousal and valence by combining all sensors included with emteqPRO (PPG, EMG, IMU, GSR, and eye-tracking) into a broader understanding of the affective state.

The potential uses for emteqPRO technology are vast and span numerous areas and applications. For example, in wellbeing and healthcare interventions, the emteqPRO can provide objective measures of affect, which can in turn aid diagnosis and treatment and help monitor training progression and effectiveness. Research is another obvious use case, as the system will be made open source for non-commercial use; researchers can use the emteqPRO platform for studies measuring physiological data in VR. In the entertainment industry, objective measures of affect can help designers and creators measure what effect their content has on users. In a world where understanding customer behaviour and related marketing strategies is a driving force for many changes and improvements across all industries, emteqPRO is well positioned to provide designers with valuable data to aid their developments. A prime example of this is the current use of emteqPRO by the HS2 developers (High Speed 2, the high-speed railway linking parts of Britain), where the emteqPRO’s affect-sensing technology is used to develop an efficient and stress-free station experience by placing passengers in a virtual replica of the station (“HS2 Passengers Use VR to Test HS2’s New Station Design | BIM+”, 2021).

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.

Author Contributions

MG and JB completed the first draft of the manuscript. MG contributed to the development of the Unity SDK, compiled the included information, collected data, and revised all versions of the manuscript based on comments and feedback from all other authors. IM contributed to the development of the system and the Unity SDK and helped collect sample data. MF was responsible for the hardware development of the system. CN is the founder of the company that developed the emteqPRO platform. All authors contributed to manuscript revision.

Funding

This work is supported by Bournemouth University and Emteq ltd. via the Centre for Digital Entertainment (EPSRC Grant No. EP/L016540/1). This work was also partially funded by the NIHR Invention for Innovation (i4i) Programme (Grant No. NIHR201283).

Conflict of Interest

Authors JB, IM, MF, IK, and CN are employed by Emteq Limited. The biofeedback system patent (Patent No. 10398373) is held by Emteq Limited. EmteqPRO is a commercially available product designed by Emteq Limited.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

As authors of the paper, we would like to extend our thanks to all emteqlabs staff actively developing the emteqPRO platform. This includes but is not limited to Martin Gjoreski, Hristijan Gjoreski, James Archer, Claire Baert, Andrew Cleal, Milky Asefa, Daniel Shirley, Craig Hutchinson, Colin Smith, Christopher Nokes, Hugo Hall, Adam Lynas and Sophia Cox.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frvir.2022.781218/full#supplementary-material

Footnotes

1https://www.emteqlabs.com/

References

Al-Rawhani, M. A., Hu, C., Giagkoulovits, C., Annese, V. F., Cheah, B. C., Beeley, J., et al. (2020). Multimodal Integrated Sensor Platform for Rapid Biomarker Detection. IEEE Trans. Biomed. Eng. 67 (2), 614–623. doi:10.1109/TBME.2019.2919192

Alexandratos, V., Bulut, M., and Jasinschi, R. (2014). “Mobile Real-Time Arousal Detection,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (Florence, Italy: Institute of Electrical and Electronics Engineers Inc), 4394–4398. doi:10.1109/ICASSP.2014.6854432

Araujo, R., Soprani, D. R., Rodrigues Botelho, T., Rodrigues, C., Carvalho, C., Frizera, A., et al. (2014). “Platform for Multimodal Signal Acquisition for the Control of Lower Limb Rehabilitation Devices,” in NEUROTECHNIX 2014 - Proceedings of the 2nd International Congress on Neurotechnology, Electronics and Informatics, 49–55. doi:10.5220/0005138900490055

Berg, L. P., and Vance, J. M. (2016). Industry Use of Virtual Reality in Product Design and Manufacturing: A Survey. Virtual Reality 21 (1), 1–17. doi:10.1007/s10055-016-0293-9

van Boxtel, A. (2010a). “Facial EMG as a Tool for Inferring Affective States,” in Proceedings of Measuring Behavior 2010.

van Boxtel, A. (2010b). “Facial EMG as a Tool for Inferring Affective States,” in Proceedings of Measuring Behavior 2010.

Branson, R., and Mannheimer, P. (2004). Forehead Oximetry in Critically Ill Patients: the Case for a New Monitoring Site. Respir. Care Clin. 10, 359–367. doi:10.1016/j.rcc.2004.04.003

Cacioppo, J. T., Martzke, J. S., Petty, R. E., and Tassinary, L. G. (1988). Specific Forms of Facial EMG Response Index Emotions during an Interview: From Darwin to the Continuous Flow Hypothesis of Affect-Laden Information Processing. J. Personal. Soc. Psychol. 54 (4), 592–604. doi:10.1037/0022-3514.54.4.592

Cacioppo, J. T., Petty, R. E., Losch, M. E., and Kim, H. S. (1986). Electromyographic Activity over Facial Muscle Regions Can Differentiate the Valence and Intensity of Affective Reactions. J. Personal. Soc. Psychol. 50 (2), 260–268. doi:10.1037/0022-3514.50.2.260

Cig, C., Kasap, Z., Egges, A., and Magnenat-Thalmann, N. (2010). “Realistic Emotional Gaze and Head Behavior Generation Based on Arousal and Dominance Factors,” in Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Berlin, Heidelberg: Springer), 6459, 278–289. doi:10.1007/978-3-642-16958-8_26

Corr, P. J. (2013). Approach and Avoidance Behaviour: Multiple Systems and Their Interactions. Emot. Rev. 5 (3), 285–290. doi:10.1177/1754073913477507

Critchley, H. D. (2002). Review: Electrodermal Responses: What Happens in the Brain. Neuroscientist 8, 132–142. doi:10.1177/107385840200800209

D’Mello, S. K., and Kory, J. (2015). A Review and Meta-Analysis of Multimodal Affect Detection Systems. ACM Comput. Surv. 47 (3), 1–36. doi:10.1145/2682899

D’Mello, S., and Kory, J. (2012). “Consistent but Modest: A Meta-Analysis on Unimodal and Multimodal Affect Detection Accuracies from 30 Studies,” in ICMI’12 - Proceedings of the ACM International Conference on Multimodal Interaction, 31–38. doi:10.1145/2388676.2388686

Dirican, A. C., and Göktürk, M. (2012). Involuntary Postural Responses of Users as Input to Attentive Computing Systems: An Investigation on Head Movements. Comput. Hum. Behav. 28 (5), 1634–1647. doi:10.1016/j.chb.2012.04.002

Du, R., Li, D., and Varshney, A. (2019). “Geollery,” in Conference on Human Factors in Computing Systems - Proceedings. May. doi:10.1145/3290605.3300915

emteqlabs (2020a). Improving Emotional And Psychological Well-Being. Available at: https://www.emteqlabs.com/wp-content/uploads/2020/11/Improving_Emotional_And_Psychological_Well-Being.pdf.

emteqlabs (2020b). Measuring What Matters in Immersive Environments. 2020. Available at: https://www.emteqlabs.com/wp-content/uploads/2020/11/Measuring_what_Matters_in_Immersive_Environments.pdf

emteqlabs (2021). An Imperative Developing Standards for Safety and Security in XR Environments. Available at: https://www.emteqlabs.com/wp-content/uploads/2021/02/emteqxrsi_white_paper-3.pdf

Fertleman, C., Aubugeau-Williams, P., Sher, C., Lim, A.-N., Lumley, S., Delacroix, S., et al. (2018). A Discussion of Virtual Reality as a New Tool for Training Healthcare Professionals. Front. Public Health 6 (FEB), 44. doi:10.3389/fpubh.2018.00044

Freeman, D., Haselton, P., Freeman, J., Spanlang, B., Kishore, S., Albery, E., et al. (2018). Automated Psychological Therapy Using Immersive Virtual Reality for Treatment of Fear of Heights: A Single-Blind, Parallel-Group, Randomised Controlled Trial. The Lancet Psychiatry 5 (8), 625–632. doi:10.1016/S2215-0366(18)30226-8

Fridlund, A. J., Schwartz, G. E., and Fowler, S. C. (1984). Pattern Recognition of Self-Reported Emotional State from Multiple-Site Facial EMG Activity during Affective Imagery. Psychophysiology 21 (6), 622–637. doi:10.1111/j.1469-8986.1984.tb00249.x

Girardi, D., Lanubile, F., and Novielli, N. (2017). “Emotion Detection Using Noninvasive Low Cost Sensors,” in 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII) (San Antonio, TX), 125–130. doi:10.1109/ACII.2017.8273589

Gnacek, M., Garrido-Leal, D., Nieto Lopez, R., Seiss, E., Kostoulas, T., Balaguer-Ballester, E., et al. (2020). “Heart Rate Detection from the Supratrochlear Vessels Using a Virtual Reality Headset Integrated PPG Sensor,” in ICMI 2020 Companion - Companion Publication of the 2020 International Conference on Multimodal Interaction (Utrecht, Netherlands: Association for Computing Machinery, Inc), 210–214. doi:10.1145/3395035.3425323

Governo, R., Eden-Green, B., Dawes, T., Mavridou, I., Giles, J., Rosten, C., et al. (2020). Evaluation of Facial Electromyographic Pain Responses in Healthy Participants. Pain Manag. 10 (6), 399–410. doi:10.2217/pmt-2020-0005

Gu, S., Wang, F., Patel, N. P., Bourgeois, J. A., and Huang, J. H. (2019). A Model for Basic Emotions Using Observations of Behavior in Drosophila. Front. Psychol. 10. doi:10.3389/fpsyg.2019.00781

Guerreiro, J., Martins, R., Silva, H., Lourenço, A., and Fred, A. (2013). BITalino - A Multimodal Platform for Physiological Computing. ICINCO 2013 - Proc. 10th Int. Conf. Inform. Control Automation Robotics 1, 500–506. doi:10.5220/0004594105000506

HP Reverb G2 Omnicept Edition | HP® Official Site (2021). Available at: https://www.hp.com/us-en/vr/reverb-g2-vr-headset-omnicept-edition.html (Accessed December 15, 2021).

HS2 Passengers Use VR to Test HS2’s New Station Design | BIM+ (2021). Available at: https://www.bimplus.co.uk/technology/hs2-passengers-use-vr-test-hs2s-new-station-design/ (Accessed June 22, 2021).

Lee, J., Matsumura, K., Yamakoshi, K.-i., Rolfe, P., Tanaka, S., and Yamakoshi, T. (2013). “Comparison between Red, Green and Blue Light Reflection Photoplethysmography for Heart Rate Monitoring during Motion,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (Osaka, Japan), 1724–1727. doi:10.1109/EMBC.2013.6609852

Li, A., Chen, S., Quan, S. F., Powers, L. S., and Roveda, J. M. (2020). A Deep Learning-Based Algorithm for Detection of Cortical Arousal during Sleep. Sleep 43 (12). doi:10.1093/sleep/zsaa120

Lou, J., Wang, Y., Nduka, C., Hamedi, M., Mavridou, I., Wang, F.-Y., et al. (2020). Realistic Facial Expression Reconstruction for VR HMD Users. IEEE Trans. Multimedia 22 (3), 730–743. doi:10.1109/TMM.2019.2933338

Magnée, M. J. C. M., Stekelenburg, J. J., Kemner, C., and de Gelder, B. (2007). Similar Facial Electromyographic Responses to Faces, Voices, and Body Expressions. NeuroReport 18 (4), 369–372. doi:10.1097/WNR.0b013e32801776e6

Marín-Morales, J., Higuera-Trujillo, J. L., Greco, A., Guixeres, J., Llinares, C., Scilingo, E. P., et al. (2019). Real vs. Immersive-Virtual Emotional Experience: Analysis of Psycho-Physiological Patterns in a Free Exploration of an Art Museum. PLOS ONE 14 (10), e0223881. doi:10.1371/journal.pone.0223881

Marín-Morales, J., Llinares, C., Guixeres, J., and Alcañiz, M. (2020). Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors 20 (18), 5163–5226. doi:10.3390/S20185163

Martens, M. A., Antley, A., Freeman, D., Slater, M., Harrison, P. J., and Tunbridge, E. M. (2019). It Feels Real: Physiological Responses to a Stressful Virtual Reality Environment and its Impact on Working Memory. J. Psychopharmacol. 33 (10), 1264–1273. doi:10.1177/0269881119860156

Martínez-Navarro, J., Bigné, E., Guixeres, J., Alcañiz, M., and Torrecilla, C. (2019). The Influence of Virtual Reality in E-Commerce. J. Business Res. 100, 475–482. doi:10.1016/j.jbusres.2018.10.054

Mavridou, I., Perry, M., Seiss, E., Kostoulas, T., and Balaguer-Ballester, E. (2019a). “Emerging Affect Detection Methodologies in VR and Future Directions,” in Virtual Reality International Conference VRIC 2019 (Laval, France), 35.

Mavridou, I., Seiss, E., Kostoulas, T., Hamedi, M., Balaguer-Ballester, E., and Nduka, C. (2019b). “Introducing the EmteqVR Interface for Affect Detection in Virtual Reality,” in 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW) (Cambridge, UK: Institute of Electrical and Electronics Engineers Inc), 83–84. doi:10.1109/ACIIW.2019.8925297

Mavridou, I., Seiss, E., Hamedi, M., Balaguer-Ballester, E., and Nduka, C. (2018a). Towards Valence Detection from EMG for Virtual Reality Applications. Available at: https://www.icdvrat.org/

Mavridou, I., Hamedi, M., Fatoorechi, M., Archer, J., Cleal, A., Balaguer-Ballester, E., et al. (2017a). “Using Facial Gestures to Drive Narrative in VR,” in SUI 2017 - Proceedings of the 2017 Symposium on Spatial User Interaction (Brighton, UK: Association for Computing Machinery, Inc). doi:10.1145/3131277.3134366

Mavridou, I., McGhee, J. T., Hamedi, M., Fatoorechi, M., Cleal, A., Balaguer-Ballester, E., et al. (2017b). “Faceteq,” in ACM International Conference Proceeding Series (Laval, France: Association for Computing Machinery). doi:10.1145/3110292.3110302

Mavridou, I., McGhee, J. T., Hamedi, M., Fatoorechi, M., Cleal, A., Balaguer-Ballester, E., et al. (2017c). “FACETEQ Interface Demo for Emotion Expression in VR,” in Proceedings - IEEE Virtual Reality (IEEE), 441–442. doi:10.1109/VR.2017.7892369

Mavridou, I., Seiss, E., Kostoulas, T., Nduka, C., and Balaguer-Ballester, E. (2018b). “Towards an Effective Arousal Detection System for Virtual Reality,” in Proceedings of the Human-Habitat for Health (H3): Human-Habitat Multimodal Interaction for Promoting Health and Well-Being in the Internet of Things Era - 20th ACM International Conference on Multimodal Interaction, ICMI 2018 (Boulder, USA: Association for Computing Machinery, Inc). doi:10.1145/3279963.3279969

Menshikova, G., Bayakovski, Y., Luniakova, E., Pestun, M., and Zakharkin, D. (2020). “Virtual Reality Technology for the Visual Perception Study” (Accessed July 28, 2020).

Otsuka, S., Kurosaki, K., and Ogawa, M. (2017). “Physiological Measurements on a Gaming Virtual Reality Headset Using Photoplethysmography: A Preliminary Attempt at Incorporating Physiological Measurement with Gaming,” in IEEE Region 10 Annual International Conference, Proceedings/TENCON (Penang, Malaysia: Institute of Electrical and Electronics Engineers Inc, December 2017), 1251–1256. doi:10.1109/TENCON.2017.8228049

Peñate, W., Rivero, F., Viña, C., Herrero, M., Betancort, M., De la Fuente, J., et al. (2019). The Equivalence between Virtual and Real Feared Stimuli in a Phobic Adult Sample: A Neuroimaging Study. J. Clin. Med. 8 (12), 2139. doi:10.3390/jcm8122139

Raghupathi, W., and Raghupathi, V. (2014). Big Data Analytics in Healthcare: Promise and Potential. Health Inf. Sci. Syst. 2 (1). doi:10.1186/2047-2501-2-3

Russell, J. A. (2003). Core Affect and the Psychological Construction of Emotion. Psychol. Rev. 110 (1), 145–172. doi:10.1037/0033-295X.110.1.145

Seshadri, D. R., Li, R. T., Voos, J. E., Rowbottom, J. R., Alfes, C. M., Zorman, C. A., et al. (2019). Wearable Sensors for Monitoring the Physiological and Biochemical Profile of the Athlete. Npj Digit. Med. 2 (1), 1–16. doi:10.1038/s41746-019-0150-9

Shuman, V., Sander, D., and Scherer, K. R. (2013). Levels of Valence. Front. Psychol. 4, 261. doi:10.3389/fpsyg.2013.00261

Siang, C., Rose, V., Stewart, I., Jenkins, K. G., Ang, C. S., and Matsangidou, M. (2018). “A Scoping Review Exploring the Feasibility of Virtual Reality Technology Use with Individuals Living with Dementia.” doi:10.2312/egve.20181325

Siegel, E. H., Wei, J., Gomes, A., Oliviera, M., Sundaramoorthy, P., Smathers, K., Vankipuram, M., et al. (2021). HP Omnicept Cognitive Load Database (HPO-CLD) – Developing a Multimodal Inference Engine for Detecting Real-time Mental Workload in VR. Technical Report. Palo Alto, CA: HP Labs.

Suyo, M. I. V., and Núñez-Torres, P. (2016). EV149. Eur. Psychiatry 33 (Suppl.), S397–S398. doi:10.1016/j.eurpsy.2016.01.1134

Tan, J.-W., Andrade, A. O., Li, H., Walter, S., Hrabal, D., Rukavina, S., et al. (2016). Recognition of Intensive Valence and Arousal Affective States via Facial Electromyographic Activity in Young and Senior Adults. PLoS ONE 11 (1), e0146691. doi:10.1371/journal.pone.0146691

Teague, C. N., Heller, J. A., Nevius, B. N., Carek, A. M., Mabrouk, S., Garcia-Vicente, F., et al. (2020). A Wearable, Multimodal Sensing System to Monitor Knee Joint Health. IEEE Sensors J. 20 (18), 10323–10334. doi:10.1109/JSEN.2020.2994552

Thayer, R. E. (1978). Toward a Psychological Theory of Multidimensional Activation (Arousal). Motiv. Emot. 2 (1), 1–34. doi:10.1007/BF00992729

The Most Powerful Real-Time 3D Creation Tool - Unreal Engine (2021). Available at: https://www.unrealengine.com/en-US/ (Accessed December 17, 2021).

Toll, R. (2020). To See or Not to See: A Study on Capillary Refill, Vol. 1732. Linköping: Linköping University Electronic Press. doi:10.3384/diss.diva-164907

Unity Real-Time Development Platform | 3D, 2D VR & AR Engine (2020). Available at: https://unity.com/ (Accessed June 22, 2021).

van Dooren, M., de Vries, J. J. G., and Janssen, J. H. (2012). Emotional Sweating across the Body: Comparing 16 Different Skin Conductance Measurement Locations. Physiol. Behav. 106 (2), 298–304. doi:10.1016/j.physbeh.2012.01.020

Wang, C.-A., Baird, T., Huang, J., Coutinho, J. D., Brien, D. C., and Munoz, D. P. (2018). Arousal Effects on Pupil Size, Heart Rate, and Skin Conductance in an Emotional Face Task. Front. Neurol. 9 (December), 1029. doi:10.3389/fneur.2018.01029

Weninger, F., Eyben, F., Schuller, B. W., Mortillaro, M., and Scherer, K. R. (2013). On the Acoustics of Emotion in Audio: What Speech, Music, and Sound Have in Common. Front. Psychol. 4, 292. doi:10.3389/fpsyg.2013.00292

What We Talk About When We Talk About Emotions (2016). Cell. Cell Press. doi:10.1016/j.cell.2016.11.029

Winter, B. B., and Webster, J. G. (1983). Driven-Right-Leg Circuit Design. IEEE Trans. Biomed. Eng. BME-30 (1), 62–66. doi:10.1109/TBME.1983.325168

Zeile, P., and Resch, B. (2018). Combining Biosensing Technology and Virtual Environments for Improved Urban Planning. GI_Forum 1, 344–357. doi:10.1553/giscience2018_01_s344

Zhang, J., Lipp, O. V., Oei, T. P. S., and Zhou, R. (2011). The Effects of Arousal and Valence on Facial Electromyographic Asymmetry during Blocked Picture Viewing. Int. J. Psychophysiol. 79 (3), 378–384. doi:10.1016/j.ijpsycho.2010.12.005

Zhang, M. (2020). PET+EMDR+VR to Reduce PTSD Symptoms. Jpbr 2 (2), 39. doi:10.22158/jpbr.v2n2p39

Keywords: virtual reality, biosensors, affect, electromyography, photoplethysmography, biomedical measurement, valence, arousal

Citation: Gnacek M, Broulidakis J, Mavridou I, Fatoorechi M, Seiss E, Kostoulas T, Balaguer-Ballester E, Kiprijanovska I, Rosten C and Nduka C (2022) emteqPRO—Fully Integrated Biometric Sensing Array for Non-Invasive Biomedical Research in Virtual Reality. Front. Virtual Real. 3:781218. doi: 10.3389/frvir.2022.781218

Received: 22 September 2021; Accepted: 10 January 2022;
Published: 11 March 2022.

Edited by:

Ruofei Du, Google, United States

Reviewed by:

Jun Gong, Apple, United States
Weiya Chen, Huazhong University of Science and Technology, China

Copyright © 2022 Gnacek, Broulidakis, Mavridou, Fatoorechi, Seiss, Kostoulas, Balaguer-Ballester, Kiprijanovska, Rosten and Nduka. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Charles Nduka, info@emteqlabs.com
