
Review Article

Front. Robot. AI, 23 June 2020 | https://doi.org/10.3389/frobt.2020.00071

Elderly Fall Detection Systems: A Literature Survey

  • 1Department of Computer Science, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, Netherlands
  • 2Computer Science, Faculty of Information & Communication Technology, University of Malta, Msida, Malta

Falling is among the most damaging events elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and the IoT. Although various existing studies focus on fall detection with individual sensors, such as wearable devices and depth cameras, the performance of these systems is still not satisfactory, as they mostly suffer from high false alarm rates. The literature shows that fusing the signals of different sensors can result in higher accuracy and fewer false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the available benchmark data sets that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved to date and to identify areas where further effort would be beneficial.

1. Introduction

More than nine percent of the population of China was aged 65 or older in 2015, and within 20 years (2017–2037) this share is expected to reach 20%1. According to the World Health Organization (WHO), around 646,000 fatal falls occur each year worldwide, the majority of which involve adults older than 65 years (WHO, 2018). This makes falls the second leading cause of unintentional injury death, after road traffic injuries. Globally, falls are a major public health problem for the elderly. Needless to say, the injuries caused by the falls that elderly people experience have many consequences for their families, but also for healthcare systems and for society at large.

As illustrated in Figure 1, Google Trends2 shows that fall detection has drawn increasing attention from both academia and industry, especially in the last couple of years, where a sudden increase can be observed. Along the same lines, the related topic of fall-likelihood prediction is also significant, coupled with applications focused on prevention and protection.


Figure 1. Interest in fall detection over time, from January 2004 to December 2019. The data is taken from Google Trends with the search topic “fall detection.” The values are normalized by the maximum interest, such that the highest interest has a value of 100.

El-Bendary et al. (2013) reviewed the trends and challenges of elderly fall detection and prediction. Detection techniques are concerned with recognizing falls after they occur and triggering an alarm to emergency caregivers, while predictive methods aim to forecast fall incidents before or during their occurrence, and therefore allow immediate actions, such as the activation of airbags.

During the past decades, much effort has been put into these fields to improve the accuracy of fall detection and prediction systems as well as to decrease their false alarm rates. Figure 2 shows the top 25 countries in terms of the number of publications on fall detection from 1945 to 2020. Most of the publications originate from the United States, followed by England, China, and Germany, among others. The data indicates that developed countries invest more in research in this field than others. Due to higher living standards and better medical resources, people in developed countries are more likely to have longer life expectancy, which results in a larger aging population in such countries (Bloom et al., 2011).


Figure 2. (A) A map and (B) a histogram of publications on fall detection by countries and regions from 1945 to 2020.

In this survey paper, we provide a holistic overview of fall detection systems, aimed at a broad readership wishing to keep abreast of the literature in this field. Besides fall detection modeling techniques, this review covers other topics, including issues pertaining to data transmission, data storage and analysis, and security and privacy, which are equally important in the development and deployment of such systems.

The rest of the paper is organized as follows. In section 2, we start by introducing the types of falls and reviewing other survey papers to illustrate the research trends and challenges to date, followed by a description of our literature search strategy. Next, in section 3 we introduce the hardware and software components typically used in fall detection systems. Sections 4 and 5 give an overview of fall detection methods that rely on individual sensors and on collections of sensors, respectively. In section 6, we address issues of security and privacy. Section 7 introduces projects and applications of fall detection. In section 8, we discuss current trends, challenges, and open issues, along with future directions. Finally, we provide a summary of the survey and draw conclusions in section 9.

2. Types of Falls and Previous Reviews on Elderly Fall Detection

2.1. Types of Falls

The impact and consequences of a fall can vary drastically depending on various factors. For instance, falls that occur whilst walking, standing, sleeping, or sitting on a chair share some characteristics but also differ significantly from one another.

In El-Bendary et al. (2013), the authors group falls into three basic categories, namely forward, lateral, and backward. Putra et al. (2017) divided falls into a broader set of categories, namely forward, backward, left-side, right-side, blinded-forward, and blinded-backward. In the study by Chen et al. (2018), falls are grouped into more specific categories, including falling laterally left and lying on the floor, falling laterally left and sitting up from the floor, falling laterally right and lying on the floor, falling laterally right and sitting up from the floor, falling forward and lying on the floor, and falling backward and lying on the floor.

Besides the direction one takes whilst falling, another important aspect is the duration of the fall, which may be influenced by age, health, and physical condition, along with the activity the individual was undertaking. Elderly people may experience falls of longer duration, because they move at low speed in activities of daily living. For instance, in episodes related to fainting or chest pain, an elderly person might try to rest against a wall before lying on the floor. In other situations, such as injuries due to obstacles or dangerous settings (e.g., slanting or uneven pavements or surfaces), an elderly person might fall abruptly. The age and gender of the subject also play a role in the kinematics of falls.

The characteristics of different types of falls are not taken into consideration in most of the work on fall detection surveyed here. In most papers to date, data sets typically contain falls that are simulated by young and healthy volunteers and do not cover all the types of falls mentioned above. Such studies therefore do not lead to models that generalize well enough in practical settings.

2.2. Review of Previous Survey Papers

There are various review papers that describe the development of fall detection from different perspectives. Due to the rapid development of smart sensors and related analytical approaches, it is necessary to revisit the trends in this field frequently. We selected the most highly cited review papers from 2014 to 2020, based on Google Scholar and Web of Science, and discuss them below. These selected review papers demonstrate the trends, challenges, and development in this field. Other significant review papers from before 2014 are also covered in order to give sufficient background on earlier work.

Chaudhuri et al. (2014) conducted a systematic review of fall detection devices for people of different ages (excluding children) from several perspectives, including background, objectives, data sources, eligibility criteria, and intervention methods. More than 100 papers were selected and reviewed. The selected papers were divided into several groups based on different criteria, such as the age of the subjects, the method of evaluation, and the devices used in the detection systems. They noted that most of the studies were based on synthetic data. Although simulated data may share common features with real falls, a system trained on such data cannot reach the same reliability as one that uses real data.

In another survey, Zhang et al. (2015) focused on vision-based fall detection systems and their related benchmark data sets, which had not been discussed in other reviews. Vision-based approaches to fall detection were divided into four categories, namely single RGB cameras, infrared cameras, depth cameras, and 3D-based methods using camera arrays. Since the advent of depth cameras, such as the Microsoft Kinect, fall detection with RGB-D cameras has been extensively and thoroughly studied due to their inexpensive price and easy installation. Systems which use calibrated camera arrays also saw prominent uptake. Because such systems rely on many cameras positioned at different viewpoints, challenges related to occlusion are typically reduced substantially, resulting in lower false alarm rates. Depth cameras have gained particular popularity because, unlike RGB camera arrays, they do not require complicated calibration and they are also less intrusive of privacy. Zhang et al. (2015) also reviewed different types of fall detection methods, which rely on the activity/inactivity of the subjects, shape (width-to-height ratio), and motion. While that review gives a thorough overview of vision-based systems, it lacks an account of fall detection systems that rely on non-vision sensors, such as wearable and ambient ones.
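The shape-based cue mentioned above often reduces to the width-to-height ratio of the segmented silhouette: an upright person is taller than wide, a person lying on the floor is wider than tall. The sketch below illustrates this idea under the assumption that a binary foreground mask has already been extracted; the function name and threshold are our own illustrative choices, not taken from any surveyed paper.

```python
import numpy as np

def aspect_ratio_fall_cue(mask, ratio_thresh=1.0):
    """Width-to-height ratio of the silhouette bounding box.

    `mask` is a 2-D boolean foreground mask. Returns True when the
    bounding box is wider than it is tall, a common shape cue for a
    person lying on the floor. The threshold is illustrative.
    """
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return False  # no foreground detected in this frame
    ys = rows.nonzero()[0]
    xs = cols.nonzero()[0]
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return bool(width / height > ratio_thresh)
```

In practice this cue is combined with inactivity and motion features, since deliberate lying down produces the same ratio as a fall.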

Further to the particular interest in depth cameras, Cai et al. (2017) reviewed the benchmark data sets acquired by Microsoft Kinect and similar cameras. They reviewed 46 public RGB-D data sets, 20 of which are highly used and cited. They compared and highlighted the characteristics of all data sets in terms of their suitability to certain applications. Therefore, the paper is beneficial for scientists who are looking for benchmark data sets for the evaluation of new methods or new applications.

Based on the review provided by Chen et al. (2017a), individual depth cameras and inertial sensors seem to be the most significant approaches in vision- and non-vision-based systems, respectively. In their review, the authors concluded that fusion of both types of sensor resulted in a system that is more robust than a system relying on one type of sensor.

The ongoing and rapid development of electronics has resulted in smaller and cheaper devices. For instance, the survey by Igual et al. (2013) noted that low-cost cameras and accelerometers embedded in smartphones may offer the most sensible technological choice for investigating fall detection. Igual et al. (2013) identified two main trends in how research is progressing in this field, namely the use of vision- and smartphone-based sensors to provide input, and the use of machine learning for the data analysis. Moreover, they reported the following three main challenges: (i) real-world deployment performance, (ii) usability, and (iii) acceptance. Usability refers to how practical elderly people find the given system. Because of privacy issues and the intrusive characteristics of some sensors, there is a lack of acceptance among the elderly of living in an environment monitored by sensors. They also pointed out several issues which need to be taken into account, such as smartphone limitations (e.g., people may not carry their smartphones with them all the time), privacy concerns, and the lack of benchmark data sets of realistic falls.

The survey papers mentioned above focus mostly on the different types of sensors that can be used for fall detection. To the best of our knowledge, there are no literature surveys that provide a holistic review of fall detection systems in terms of data acquisition, data analysis, data transport and storage, sensor networks and Internet of Things (IoT) platforms, as well as security and privacy, which are significant in the deployment of such systems.

2.3. Key Results of Pioneering Papers

In order to illustrate a timeline of fall detection development, in this section we focus on the key and pioneering papers. Through manual filtering of papers using Web of Science, one can find the trendsetting and highly cited papers in this field. By analyzing the retrieved articles using CiteSpace, one can see that fall detection research first appeared in the 1990s, beginning with the work by Lord and Colvin (1991) and Williams et al. (1998). A miniature accelerometer and a microcomputer chip embedded in a badge were used to detect falls (Lord and Colvin, 1991), while Williams et al. (1998) applied a piezoelectric shock sensor and a mercury tilt switch that monitored the orientation of the body. At first, most studies were based on accelerometers, including the work by Bourke et al. (2007), who compared whether the trunk or the thigh offers the best location to attach the sensor. Their results showed that a person's trunk is the better location, and they achieved 100% specificity with a certain threshold value and a trunk-mounted sensor. This method was the state of the art at the time, which undoubtedly helped it become the most highly cited paper in the field.

At the time, the trend was to use individual sensors for detection. Another key paper, by Bourke and Lyons (2008), explored the problem using a single gyroscope measuring three variables, namely angular velocity, angular acceleration, and the change in the subject's trunk angle. Three empirically determined thresholds were set to distinguish falls from non-falls: a fall is detected when the angular velocity, the angular acceleration, and the change in trunk angle all simultaneously exceed their respective thresholds. They reported an accuracy of 100% on a data set with only four kinds of falls and 480 movements simulated by young volunteers. However, classifiers based solely on either accelerometers or gyroscopes are argued to suffer from insufficient robustness (Tsinganos and Skodras, 2018). Later, Li et al. (2009) investigated the fusion of gyroscope and accelerometer data for the classification of falls and non-falls. In their work, they demonstrated how a fusion-based approach resulted in more robust classification. For instance, it could distinguish falls more accurately from certain fall-like activities, such as sitting down quickly and jumping, which are hard to detect using a single accelerometer. This work inspired further research on sensor fusion. These two types of sensors can nowadays be found in all smartphones (Zhang et al., 2006; Dai et al., 2010; Abbate et al., 2012).
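The triple-threshold logic of this family of detectors can be sketched as follows. The derivative and integral relations follow the description above, but the threshold values, the sampling setup, and the function name are illustrative placeholders, not the empirically determined values of Bourke and Lyons (2008).

```python
import numpy as np

def detect_fall_gyro(angular_velocity, dt,
                     vel_thresh=3.0, acc_thresh=5.0, angle_thresh=1.0):
    """Flag a fall when angular velocity, angular acceleration, and the
    cumulative change in trunk angle all exceed their thresholds at the
    same sample. `angular_velocity` is in rad/s, `dt` is the sampling
    interval in seconds; thresholds are illustrative placeholders.
    """
    angular_velocity = np.asarray(angular_velocity, dtype=float)
    # Angular acceleration as the numerical derivative of angular velocity.
    angular_acceleration = np.gradient(angular_velocity, dt)
    # Trunk-angle change as the running integral of angular velocity.
    trunk_angle_change = np.cumsum(angular_velocity) * dt

    # All three criteria must hold simultaneously to flag a fall.
    fall_mask = ((np.abs(angular_velocity) > vel_thresh) &
                 (np.abs(angular_acceleration) > acc_thresh) &
                 (np.abs(trunk_angle_change) > angle_thresh))
    return bool(fall_mask.any())
```

A fast toppling motion trips all three thresholds together, whereas a slow sit-down, although it changes the trunk angle, never reaches the velocity threshold.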

Besides the two non-vision-based types of sensors mentioned above, vision-based sensors, such as surveillance cameras, and ambience-based sensors started becoming an attractive alternative. Rougier et al. (2011b) proposed a shape-matching technique to track a person's silhouette through a video sequence. The deformation of the human shape is then quantified from the silhouettes using shape analysis methods, and falls are finally distinguished from normal activities using a Gaussian mixture model. After surveillance cameras, depth cameras also attracted substantial attention in this field. The earliest research applying a Time-of-Flight (ToF) depth camera was conducted in 2010 by Diraco et al. (2010). They proposed a novel approach based on visual sensors which does not require landmarks, calibration patterns, or user intervention. A ToF camera is, however, expensive and has low image resolution. Following that, the Kinect depth camera was first used in 2011 by Rougier et al. (2011a). Two features, the height of the human centroid and the velocity of the body, were extracted from the depth information. A simple threshold-based algorithm was applied to detect falls, and an overall success rate of 98.7% was achieved.
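The two-feature threshold rule used with depth cameras can be sketched as follows, assuming the per-frame centroid heights have already been extracted from the depth map. The thresholds and function name are illustrative assumptions, not the values used by Rougier et al. (2011a).

```python
def detect_fall_depth(centroid_heights, fps,
                      height_thresh=0.4, velocity_thresh=-1.0):
    """Flag a fall when the body centroid drops below a height threshold
    while moving downward faster than a velocity threshold.

    `centroid_heights` are per-frame heights in meters above the floor
    plane, `fps` is the camera frame rate. Thresholds are illustrative.
    """
    for prev, curr in zip(centroid_heights, centroid_heights[1:]):
        velocity = (curr - prev) * fps  # m/s, negative when moving down
        if curr < height_thresh and velocity < velocity_thresh:
            return True
    return False
```

The velocity condition is what separates a fall from slowly sitting or lying down: both end with a low centroid, but only the fall reaches it quickly.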

After the introduction of the Kinect by Microsoft, there was a large shift in research from accelerometers to depth cameras. Accelerometers and depth cameras have become the most popular sensors, both individually and in combination (Li et al., 2018). The combination of these two sensors achieved a substantial improvement compared to using either sensor separately.

2.4. Strategy of the Literature Search

We use two databases, namely Web of Science and Google Scholar, to search for relevant literature. Since advancements have been made at a rapid pace recently, our searches included articles published in the last 6 years (since 2014). We also consider all survey papers that have been published on the topic of fall detection. Moreover, we give an account of all relevant benchmark data sets that have been used in this literature.

For the keywords “fall detection”, 4,024 and 575,000 articles published since 2014 were found in the two databases mentioned above, respectively. In order to narrow down our search to the more relevant articles, we compiled a list of the most frequently used keywords, which we report in Table 1.


Table 1. The most frequently used keywords in the topic of fall detection.

We use the keywords identified above to generate the queries listed in Table 2 in order to make the search more specific to the three classes of sensors that we are interested in. For the retrieved articles, we examine their contributions and keep only those that are truly relevant to our survey. For instance, articles that focus on rehabilitation after falls or the causes of falls, among others, are filtered out manually. This process, which is illustrated in Figure 3, resulted in a total of 87 articles, 13 of which describe benchmark data sets.


Table 2. Search queries used in Google Scholar and Web of Science for the three types of sensor and sensor fusion.


Figure 3. Illustration of the literature search strategy. The wearable-based queries in Table 2 return 28 articles. The vision- and ambient-based queries return 31 articles, and the sensor fusion queries return 28 articles.

3. Hardware and Software Components Involved in a Fall Detection System

Most fall detection research shares a similar system architecture, which can be divided into four layers, namely the Physiological Sensing Layer (PSL), the Local Communication Layer (LCL), the Information Processing Layer (IPL), and the User Application Layer (UAL), as suggested by Ray (2014) and illustrated in Figure 4.


Figure 4. The main components typically present within fall detection system architectures include the illustrated sequence of four layers. Data is collected in the physiological sensing layer, transferred through the local communication layer, then it is analyzed in the information processing layer, and finally the results are presented in the user application layer.

The PSL is the fundamental layer that contains the various (smart) sensors used to collect physiological and ambient data from the persons being monitored. The most commonly used sensors nowadays include accelerometers that sense acceleration, gyroscopes that detect angular velocity, and magnetometers that sense orientation. Video surveillance cameras, which provide a more traditional means of sensing human activity, are also often used, but are installed in specific locations, typically with fixed fields of view. More details about the PSL are discussed in sections 4.1 and 5.1.

The next layer, the LCL, is responsible for sending the sensor signals to the upper layers for further processing and analysis. This layer may use both wireless and wired methods of transmission, connected to local computing facilities or to cloud computing platforms. The LCL typically takes the form of one (or potentially more) communication protocols, including wireless media such as cellular, Zigbee, Bluetooth, and WiFi, or even wired connections. We provide more details on the LCL in sections 4.2 and 5.2.

The IPL is a key component of the system. It includes hardware and software components, such as micro-controllers, to analyze data from the PSL and transfer it to higher layers. In terms of software components, different kinds of algorithms, such as threshold-based methods, conventional machine learning, deep learning, and deep reinforcement learning, are discussed in sections 4.3, 5.3, and 8.1.

Finally, the UAL concerns applications that assist the users. For instance, if a fall is detected in the IPL, a notification can first be sent to the user; if the user confirms the fall or does not answer, an alarm is sent to the nearest emergency caregivers, who are expected to take immediate action. There are many products, such as Shimmer and AlertOne, which have been deployed as commercial applications. We illustrate other kinds of applications in section 7.
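The four-layer flow described above can be summarized in a minimal schematic. The layer names follow Ray (2014), but the function signatures, the placeholder threshold, and the class name are our own illustrative simplifications, not part of any deployed system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FallDetectionPipeline:
    """Schematic of the four-layer architecture: PSL -> LCL -> IPL -> UAL."""
    sense: Callable[[], List[float]]                 # PSL: read raw samples
    transmit: Callable[[List[float]], List[float]]   # LCL: forward the samples
    analyze: Callable[[List[float]], bool]           # IPL: classify fall / no fall
    notify: Callable[[bool], str]                    # UAL: alert user or caregiver

    def run_once(self) -> str:
        samples = self.sense()
        received = self.transmit(samples)
        is_fall = self.analyze(received)
        return self.notify(is_fall)

# Illustrative wiring: an in-memory "hop" for the LCL and a naive
# magnitude threshold for the IPL stand in for real components.
pipeline = FallDetectionPipeline(
    sense=lambda: [0.1, 0.2, 9.8],
    transmit=lambda s: s,
    analyze=lambda s: max(s) > 9.0,
    notify=lambda f: "alarm sent" if f else "no action",
)
```

Separating the layers this way mirrors the deployments surveyed here: each layer can be swapped (e.g., Bluetooth vs. WiFi in the LCL, or threshold vs. deep learning in the IPL) without touching the others.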

4. Fall Detection Using Individual Sensors

4.1. Physiological Sensing Layer (PSL) of Individual Sensors

As mentioned above, fall detection research applies either a single sensor or a fusion of multiple sensors. The methods of collecting data are typically divided into four main categories, namely individual wearable sensors, individual visual sensors, individual ambient sensors, and data fusion by sensor networks. Whilst some literature groups visual and ambient sensors together, we treat them as two different categories in this survey, because visual sensors have become more prominent as a detection method with the advent of depth (RGB-D) cameras, such as the Kinect.

4.1.1. Individual Wearable Sensors

Falls may result in key physiological variations of the human body, which provide a criterion to detect a fall. By measuring various attributes of the human body using accelerometers, gyroscopes, glucometers, pressure sensors, ECG (electrocardiography), EEG (electroencephalography), or EMG (electromyography), one can detect anomalies within subjects. Due to their mobility, portability, low cost, and availability, wearable devices are regarded as one of the key types of sensors for fall detection and have been widely studied as a promising direction for both fall detection and prediction.

Based on our search criteria and filtering strategy (Tables 1, 2), 28 studies on fall detection using individual wearable devices, including eight papers focusing on public data sets, were selected and described to illustrate the trends and challenges of fall detection during the past 6 years. Some conclusions can be drawn from this literature in comparison to the studies before 2014. From Table 3, we note that studies applying accelerometers account for a large percentage of the research in this field. To the best of our knowledge, only Xi et al. (2017) deployed electromyography to detect falls, and 19 out of 20 papers applied an accelerometer. Although the equipment used, such as Shimmer nodes, smartphones, and smartwatches, often contains other sensors like gyroscopes and magnetometers, these sensors were not used to detect falls. Bourke et al. (2007) also found that accelerometers are regarded as the most popular sensors for fall detection, mainly due to their affordable cost, easy installation, and relatively good performance.


Table 3. Fall detection using individual wearable devices from 2014 to 2020.

Although smartphones have gained attention for studying falls, the underlying sensors of systems using them are still accelerometers and gyroscopes (Shi et al., 2016; Islam et al., 2017; Medrano et al., 2017; Chen et al., 2018). Users are more likely to carry smartphones all day rather than extra wearable devices, so smartphones are useful for eventual real-world deployments (Zhang et al., 2006; Dai et al., 2010).
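Many of the accelerometer-based systems discussed above build on some variant of an acceleration-magnitude rule: a brief near-free-fall dip (magnitude well below 1 g) followed shortly by an impact spike (well above 1 g). A minimal sketch follows; the thresholds, the look-ahead window, and the function name are illustrative assumptions rather than values from any surveyed paper.

```python
import math

def detect_fall_accel(samples, lower_g=0.4, upper_g=2.5):
    """Flag a fall from triaxial accelerometer data.

    `samples` is a sequence of (ax, ay, az) tuples in units of g.
    A fall is flagged when the acceleration magnitude dips below
    `lower_g` (near free fall) and then spikes above `upper_g`
    (impact) within the next 25 samples. Thresholds are illustrative.
    """
    magnitudes = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
    for i, m in enumerate(magnitudes):
        if m < lower_g:  # candidate free-fall phase
            # Look for an impact spike shortly after the dip.
            if any(m2 > upper_g for m2 in magnitudes[i + 1:i + 26]):
                return True
    return False
```

Normal walking keeps the magnitude close to 1 g, so neither condition triggers, whereas a fall produces the characteristic dip-then-spike pattern.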

4.1.2. Individual Visual Sensors

Vision-based detection is another prominent method. Extensive effort in this direction has been demonstrated, some of which (Akagündüz et al., 2017; Ko et al., 2018; Shojaei-Hashemi et al., 2018) shows promising performance. Although most cameras are not as portable as wearable devices, they offer other advantages which make them viable options depending on the scenario. Most static RGB cameras are unobtrusive and mains-powered, hence there is no need to worry about battery limitations. The viability of vision-based approaches has been demonstrated using infrared cameras (Mastorakis and Makris, 2014), RGB cameras (Charfi et al., 2012), and RGB-D depth cameras (Cai et al., 2017). One main challenge of vision-based detection is the potential violation of privacy due to the level of detail that cameras can capture, such as personal information, appearance, and visuals of the living environment.

Further to the information that we report in Table 4, we note that RGB, depth, and infrared cameras are the three main visual sensors used. Moreover, the RGB-D camera (Kinect) is among the most popular vision-based sensors, as 12 out of 22 studies applied it in their work. Nine of the other 10 studies used RGB cameras, including cameras built into smartphones, web cameras, and monocular cameras, while the remaining study used the infrared camera within the Kinect.


Table 4. Fall detection using individual vision-based devices from 2014 to 2020.

Static RGB cameras were the most widely used sensors in vision-based fall detection research conducted before 2014, although the accuracy of RGB camera-based detection systems varies drastically with environmental conditions, such as illumination changes, which often results in limitations during the night. Besides, RGB cameras are inherently likely to have a higher false alarm rate, because some deliberate actions, like lying on the floor, sleeping, or sitting down abruptly, are not easily distinguished in frames captured by RGB cameras. The launch of the Microsoft Kinect, which consists of an RGB camera, a depth sensor, and a multi-array microphone, stimulated a trend in 3D data collection and analysis, causing a shift from RGB to RGB-D cameras. Kinect depth cameras took the place of traditional RGB cameras and became the second most popular sensors in the field of fall detection after 2014 (Xu et al., 2018).

In recent years, we have seen increased interest in the use of wearable cameras for the detection of falls. For instance, Ozcan and Velipasalar (2016) exploited the cameras on smartphones: smartphones were attached to the waists of subjects and their built-in cameras were used to record visual data. Ozcan et al. (2017) investigated how web cameras (e.g., the Microsoft LifeCam) attached to the waists of subjects can contribute to fall detection. Although neither approach is yet practical for deployment in real applications, they show a new direction which combines the advantages of wearable and visual sensors.

Table 4 reports the work conducted with individual vision-based sensors. The majority of research still makes use of simulated data. Only two studies use real-world data: the one by Boulard et al. (2014) has actual fall data, and the other, by Stone and Skubic (2015), has mixed data, including 9 genuine falls and 445 falls simulated by trained stunt actors. In contrast to the real data sets of Klenk et al. (2016) collected with wearable devices, there are few purely genuine data sets collected in real-life scenarios using individual visual sensors.

4.1.3. Individual Ambient Sensors

Ambient sensors provide another non-intrusive means of fall detection. Sensors such as active infrared, RFID, pressure, smart tiles, magnetic switches, Doppler radar, ultrasonic sensors, and microphones are used to detect the environmental changes caused by falling, as shown in Table 5. They provide an innovative direction in this field, namely passive and pervasive detection. Ultrasonic sensor networks are among the earliest solutions in fall detection systems: Hori et al. (2004) argued that one can detect falls by placing a series of spatially distributed sensors in the space where elderly persons live. In Wang et al. (2017a,b), a new fall detection approach using ambient sensors is proposed. It relies on Wi-Fi, which, due to its non-invasive and ubiquitous characteristics, is gaining more and more popularity. However, the studies by Wang et al. (2017a,b) are limited in terms of multi-person detection, because their classifiers are not robust enough to distinguish new subjects and environments. In order to tackle this issue, other studies have developed more sophisticated methods, including the Aryokee (Tian et al., 2018) and FallDeFi (Palipana et al., 2018) systems. The Aryokee system is ubiquitous and passive and uses RF-sensing methods: over 140 people were engaged to perform 40 kinds of activities in different environments for the data collection, and a convolutional neural network was used to classify falls. Palipana et al. (2018) developed a fall detection technique named FallDeFi, which uses WiFi signals as the enabling sensing technology. They built a system applying time-frequency analysis of WiFi Channel State Information (CSI) and achieved above 93% average accuracy.


Table 5. Fall detection using individual ambient devices from 2014 to 2020.
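The time-frequency idea behind CSI-based detectors such as FallDeFi can be illustrated with a toy feature: slide a window over the CSI amplitude stream, take the FFT of each window, and measure how much spectral energy lies above the lowest frequency bins, since a fast body motion such as a fall produces a burst of high-frequency energy. The window size, cutoff, and function name below are illustrative assumptions, not the actual processing of Palipana et al. (2018).

```python
import numpy as np

def csi_highfreq_energy(csi_amplitude, window=64, step=32):
    """Peak fraction of spectral energy above the lowest FFT bins,
    computed over sliding windows of a CSI amplitude stream.

    Fast motion concentrates energy in higher bins, so large values
    suggest a rapid movement. The window/step sizes and the cutoff
    (one eighth of the window) are illustrative.
    """
    x = np.asarray(csi_amplitude, dtype=float)
    fractions = []
    for start in range(0, len(x) - window + 1, step):
        seg = x[start:start + window]
        seg = seg - seg.mean()                    # remove the DC component
        spectrum = np.abs(np.fft.rfft(seg)) ** 2  # power spectrum of the window
        total = spectrum.sum()
        if total > 0:
            fractions.append(spectrum[window // 8:].sum() / total)
    return max(fractions) if fractions else 0.0
```

A real system would feed such spectrogram features to a trained classifier; the thresholded feature alone only separates fast from slow motion.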

RF-sensing technologies have also been widely applied to recognition tasks beyond fall detection (Zhao et al., 2018; Zhang et al., 2019), even for subtle movements. Zhao et al. (2018) studied human pose estimation with multiple persons. Their experiments showed that RF-Pose has better performance under occlusion. This improvement is attributable to the ability of their method to estimate the pose of a subject through a wall, something that visual sensors fail to do. Further research on RF-sensing was conducted by Niu et al. (2018), with applications to finger gesture recognition, human respiration, and chin movement. Their research can potentially be used in applications for autonomous health monitoring and home appliance control. Furthermore, Zhang et al. (2019) used an RF-sensing approach in their proposed system, WiDIGR, for gait recognition. Guo et al. (2019) claimed that RF-sensing is drawing more attention because it is device-free for users and, in contrast to RGB cameras, can work under low-light conditions and occlusions.

4.1.4. Subjects

For most research groups there is not enough time and funding to collect data continuously over several years to study fall detection. Due to the rarity of genuine fall data, Li et al. (2013) hired stunt actors to simulate different kinds of falls. There are also many data sets of falls simulated by young, healthy students, as in the studies by Bourke et al. (2007) and Ma et al. (2014). For obvious reasons, elderly subjects cannot be engaged to perform falls for data collection. In most of the existing data sets, falls are simulated by young volunteers who perform soft falls onto protective mats on the ground. Elderly subjects, however, often behave quite differently, having less control over the speed of the body. One potential solution is simulated data sets created with physics engines, such as OpenSim. Previous research (Mastorakis et al., 2007, 2018) has shown that simulated data from OpenSim improved the performance of the resulting models. Another solution is online learning algorithms which adapt to subjects who were not represented in the training data. For instance, Deng et al. (2014) applied the transfer learning reduced Kernel Extreme Learning Machine (RKELM) approach and showed how a classifier trained on data sets collected from young volunteers can be adapted to the elderly. The algorithm consists of two parts, namely offline classification modeling and online updating modeling, the latter used to adapt to new subjects. After the model is trained offline on labeled training data, unlabeled test samples are fed into the pre-trained RKELM classifier, which assigns each a confidence score. Samples whose confidence score exceeds a certain threshold are used to update the model. In this way, the model gradually adapts to new subjects as their samples are received.
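The online updating step can be sketched as follows. This is a generic confidence-thresholded self-training loop, not the RKELM formulation itself: a simple nearest-centroid classifier stands in for the pre-trained model, and the class names, feature space, and threshold value are illustrative.

```python
# Sketch of confidence-thresholded online updating, in the spirit of the
# pipeline described above. A nearest-centroid classifier stands in for the
# pre-trained RKELM model; all names and values are illustrative.

def confidence(centroids, x):
    """Return (predicted_label, score): the margin between the distances
    to the two nearest class centroids, normalized to [0, 1)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(c, x)) ** 0.5, label)
        for label, c in centroids.items()
    )
    (d1, label), (d2, _) = dists[0], dists[1]
    return label, (d2 - d1) / (d2 + d1 + 1e-9)

def online_update(centroids, counts, stream, threshold=0.5):
    """Feed unlabeled samples through the model; samples classified with a
    confidence above `threshold` move the matching centroid toward them."""
    for x in stream:
        label, score = confidence(centroids, x)
        if score >= threshold:
            n = counts[label]
            centroids[label] = tuple(
                (c * n + xi) / (n + 1) for c, xi in zip(centroids[label], x)
            )
            counts[label] = n + 1
    return centroids

# Toy example: two classes ("fall", "adl") in a 2-D feature space.
# The third, ambiguous sample lies halfway between both centroids and is
# therefore not used to update the model.
centroids = {"fall": (0.0, 0.0), "adl": (10.0, 10.0)}
counts = {"fall": 1, "adl": 1}
online_update(centroids, counts, [(1.0, 1.0), (9.0, 9.0), (5.0, 5.0)])
```

The same skeleton applies whatever the underlying classifier is, as long as it can expose a confidence score for unlabeled samples.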
Namba and Yamada (2018a,b) demonstrated how deep reinforcement learning can be applied to assistive mobile robots so that they adapt to conditions that were not present in the training set.

4.2. Local Communication Layer (LCL) of Individual Sensors

Two components are involved in communication within such systems. First, data collected from the different smart sensors are sent to local computing facilities or to remote cloud computing. Then, after the final decision is made by these computing platforms, instructions and alarms are sent to the appointed caregivers for immediate assistance (El-Bendary et al., 2013).

Data communication protocols are divided into two categories, namely wireless and wired transmission. For the former, transmission protocols include ZigBee, Bluetooth, Wi-Fi, WiMAX, and cellular networks.

Most of the studies that used individual wearable sensors deployed commercially available wearable devices. In those cases, data was communicated by transmission modules built into the wearable products, over mediums such as Bluetooth and cellular networks. In contrast to detection systems using wearable devices, most static vision- and ambient-based setups are connected to smart gateways by wired connections. Since these approaches are usually applied as static detection methods, a wired connection is the better choice.

4.3. Information Processing Layer (IPL) of Individual Sensors

4.3.1. Detection Using Threshold-Based and Data-Driven Algorithms

Threshold-based and data-driven algorithms (including machine learning and deep learning) are the two main approaches that have been used for fall detection. Threshold-based approaches are usually used for data coming from individual sensors, such as accelerometers, gyroscopes, and electromyography. Their decisions are made by comparing values measured by the concerned sensors against empirically established thresholds. Data-driven approaches are more suitable for sensor fusion as they can learn non-trivial, non-linear relationships from the data of all involved sensors. In terms of the algorithms used to analyze data collected with wearable devices, Figure 5 demonstrates a significant shift toward machine learning based approaches, in comparison to the work conducted between 1998 and 2012. Of the papers published between 1998 and 2012, threshold-based approaches account for 71%, while only 4% applied machine learning based methods (Schwickert et al., 2013). We believe that this shift is due to two main reasons. First, the rapid development of affordable sensors and the rise of the Internet of Things made it easier to deploy multiple sensors in different applications, and, as mentioned above, the non-linear fusion of multiple sensors can be modeled very well by machine learning approaches. Second, with the breakthrough of deep learning, threshold-based approaches have become even less preferable. Moreover, different types of machine learning approaches have been explored, namely Bayesian networks, rule-based systems, nearest-neighbor techniques, and neural networks. These data-driven approaches (Gharghan et al., 2018) show better accuracy and are more robust than threshold-based methods. Notably, data-driven approaches are also more resource-hungry than threshold-based methods. With the continuing advancement of hardware, however, this is not a major concern, and we foresee that more effort will be invested in this direction.
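To make the contrast concrete, a minimal threshold-based detector on tri-axial accelerometer data can be sketched as follows. The 2.5 g threshold and the sample values are illustrative; deployed systems tune such thresholds empirically and add further checks (posture, impact duration) to reduce false alarms.

```python
# Minimal threshold-based fall detector on tri-axial accelerometer data.
# The 2.5 g threshold is illustrative; real systems tune it empirically
# and combine it with posture/impact checks to reduce false alarms.

G = 9.81  # gravitational acceleration, m/s^2

def magnitude(ax, ay, az):
    """Resultant acceleration in units of g."""
    return (ax * ax + ay * ay + az * az) ** 0.5 / G

def detect_fall(samples, threshold_g=2.5):
    """Flag a fall if any sample's magnitude exceeds the threshold."""
    return any(magnitude(*s) > threshold_g for s in samples)

walking = [(0.2, 9.8, 0.3), (0.5, 10.1, 0.2)]   # stays near 1 g
impact  = [(0.2, 9.8, 0.3), (18.0, 25.0, 5.0)]  # sharp spike (~3.2 g)

print(detect_fall(walking))  # False
print(detect_fall(impact))   # True
```

A data-driven method would instead learn the decision boundary from labeled windows of such samples, which is what makes it applicable to fused, multi-sensor input.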

FIGURE 5
www.frontiersin.org

Figure 5. Different types of methods used in fall detection using individual wearable sensors in the period 1998–2012 based on the survey of Schwickert et al. (2013) and in the period 2014–2020 based on our survey. The term “others” refers to traditional methods that are neither based on threshold nor on machine learning, and the term “N/A” stands for not available and refers to studies whose methods are not clearly defined.

4.3.2. Detection Using Deep Learning

Traditional machine learning approaches determine mapping functions between handcrafted features extracted from raw training data and the respective output labels (e.g., fall or no fall, to keep it simple). The extraction of handcrafted features requires domain expertise and is, therefore, limited to the knowledge of the domain experts. Despite this limitation, literature shows that traditional machine learning, based on support vector machines, hidden Markov models, and decision trees, is still very active in the field of fall detection using individual wearable non-visual or ambient sensors (e.g., accelerometers) (Wang et al., 2017a,b; Chen et al., 2018; Saleh and Jeannès, 2019; Wu et al., 2019). For visual sensors the trend has been moving toward deep learning, using convolutional neural networks (CNNs) (Adhikari et al., 2017; Kong et al., 2019; Han et al., 2020) or LSTMs (Shojaei-Hashemi et al., 2018). Deep learning is a sophisticated learning framework that, besides the mapping function used in traditional machine learning, also learns, in a hierarchical fashion, the features that characterize the concerned classes (e.g., falls and no falls). This approach has been inspired by the visual system of the mammalian brain (LeCun et al., 2015). In computer vision applications, which take images or videos as input, deep learning has been established as the state of the art. Accordingly, similar to other computer vision applications, fall detection approaches that rely on vision data have been shifting from traditional machine learning to deep learning in recent years.

4.3.3. Real Time and Alarms

Real-time operation is a key feature for fall detection systems, especially for commercial products. Considering that certain falls can be fatal or detrimental to health, it is crucial that deployed fall detection systems have high computational efficiency, preferably operating in (near) real-time. Below, we comment on how the methods proposed in the reviewed literature fit within this aspect.

The percentage of studies applying real-time detection with static visual sensors is lower than that of wearable devices. For the studies using wearable devices, Table 3 illustrates that six of the 20 studies we reviewed can detect falls and send alarms. There are, however, few studies that demonstrate the ability to process data and send alerts in real-time among the work conducted with individual visual sensors. Based on Table 4, one can note that although 40.9% (nine out of 22) of the studies claim that their systems can be used in real-time, only one study showed that an alarm can actually be sent in real-time. There are two main reasons why a higher percentage of vision-based systems cannot be used in real-time. Firstly, visual data is much larger and its processing therefore more time consuming than that of the one-dimensional signals coming from non-vision wearable devices. Secondly, most of the work using vision sensors conducted experiments with off-line methods, and modules such as data transmission were not involved.

4.3.3.1. Summary

• For single-sensor-based fall detection systems most of the studies used data sets that include simulated falls by young and healthy volunteers. Further work is needed to establish whether such simulated falls can be used to detect genuine falls by the elderly.

• The types of sensors utilized in fall detection systems have changed in the past 6 years. For individual wearable sensors, accelerometers are still the most frequently deployed sensors. Static vision-based devices shifted from RGB to RGB-D cameras.

• Data-driven machine learning and deep learning approaches are gaining more popularity especially with vision-based systems. Such techniques may, however, be heavier than threshold-based counterparts in terms of computational resources.

• The majority of proposed approaches, especially those that rely on vision-based sensors, work in offline mode as they cannot operate in real-time. While such methods can be effective in terms of detection, their practical use is debatable as the time to respond is crucial.

5. Sensor Fusion by Sensor Network

5.1. Physiological Sensing Layer (PSL) Using Sensor Fusion

5.1.1. Sensors Deployed in Sensor Networks

In terms of sensor fusion, there are two categories, typically referred to as homogeneous and heterogeneous, which take input from three types of sensors, namely wearable, visual, and ambient sensors, as shown in Figure 6. Sensor fusion uses multiple, different signals coming from various devices, which may include accelerometers, gyroscopes, magnetometers, and visual sensors, among others. This is done so that the strengths of the devices complement each other, allowing the design and development of more robust algorithms to monitor the health of subjects and detect falls (Spasova et al., 2016; Ma et al., 2019).

FIGURE 6
www.frontiersin.org

Figure 6. Different kinds of individual sensors and sensor networks, including vision-based, wearable, and ambient sensors, along with sensor fusion.

For the visual detection based approaches, the fusion of signals coming from RGB (Charfi et al., 2012), and RGB-D depth cameras along with camera arrays have been studied (Zhang et al., 2014). They showed that such fusion provides more viewpoints of detected locations, and improves the stability and robustness by decreasing false alarms due to occluded falls (Auvinet et al., 2011).

Li et al. (2018) combined accelerometer data from smartphones and Kinect depth data as well as smartphone camera signals. Liu et al. (2014) and Yazar et al. (2014) fused data from infrared sensors with ambient sensors, and data from doppler and vibration sensors separately. Among them, accelerometers and depth cameras (Kinect) are most frequently studied due to their low costs and effectiveness.

5.1.2. Sensor Networks Platform

Most of the existing IoT platforms, such as Microsoft Azure IoT, IBM Watson IoT Platform, and Google Cloud Platform, have not been used in the deployment of fall detection approaches based on sensor fusion. In general, research studies on fall detection using sensor fusion are carried out with offline methods and decision fusion approaches, so there is no need for data transmission and storage modules. From Tables 6, 7, one can also observe that researchers mostly used their own workstations or personal computers as platforms, since off-line fall detection requires neither the integration of sensors nor real-time analysis.

TABLE 6
www.frontiersin.org

Table 6. Fall detection by fusion of wearable sensors from 2014 to 2020.

TABLE 7
www.frontiersin.org

Table 7. Fall detection using fusion of sensor networks from 2014 to 2020.

Some works, such as those in Kwolek and Kepski (2014), Kepski and Kwolek (2014), and Kwolek and Kepski (2016), applied low-power single-board computer development platforms running Linux, namely the PandaBoard, PandaBoard ES, and A13-OlinuXino. The A13-OlinuXino is an ARM-based single-board computer that runs the Debian Linux distribution. The PandaBoard ES, the updated version of the PandaBoard, can run different Linux-based operating systems, including Android and Ubuntu. It has 1 GB of DDR2 SDRAM, dual USB 2.0 ports, wired 10/100 Ethernet, and wireless Ethernet and Bluetooth connectivity. Linux is well-established for real-time embedded platforms since it provides various flexible inter-process communication methods, which suits fall detection using sensor fusion well.

In the research by Kwolek and Kepski (2014, 2016), wearable devices and a Kinect were connected to the PandaBoard through Bluetooth and cable, respectively. First, data was collected by the accelerometers and Kinect sensors individually, then transmitted and stored on a memory card. The data transmission procedure is asynchronous, since the accelerometers and the Kinect have different sampling rates. Finally, all data was grouped together and processed by classification models that detected falls. The authors reported high accuracy rates but could not compare with other approaches since there was no benchmark data set.

Spasova et al. (2016) applied the A13-OlinuXino board as their platform. A standard web camera was connected to it via USB and an infrared camera was connected to the development board via I2C (Inter-Integrated Circuit). Their experiment achieved excellent performance, with over 97% sensitivity and specificity. They claim that their system can run in real-time on low-cost hardware and an open-source software platform.

Despite the availability of the platforms mentioned above, the majority of fall detection studies trained their models offline with a single sensor on personal computers. The studies in Kwolek and Kepski (2014), Kepski and Kwolek (2014), Kwolek and Kepski (2016), and Spasova et al. (2016) utilized single-board computer platforms to demonstrate the efficacy of their approaches. The crucial aspects of scalability and efficiency were not addressed, so it is difficult to assess the suitability of their methods for real-world applications. We believe that the future trend is an interdisciplinary approach that deploys the data analysis modules on mature cloud platforms, which can provide a stable and robust environment while meeting the exploding demands of commercial applications.

5.1.3. Subjects and Data Sets

Although some groups devoted their efforts to acquiring data of genuine falls, most researchers used data containing simulated falls. Monitoring the lives of elderly people and waiting to capture real falls is very sensitive and time consuming. Having said that, with regard to sensor fusion by wearable devices, there have been some attempts to build data sets of genuine falls in real life. FARSEEING (Fall Repository for the design of Smart and self-adaptive Environments prolonging Independent living) is one such data set (Klenk et al., 2016). It is the largest data set of genuine real-life falls and is open to public research upon request on their website. From 2012 to 2015, more than 2,000 volunteers were involved, and more than 300 real falls were collected in a collaboration between six institutions3.

As for fusion of visual sensors and combinations of other non-wearable sensors, it is quite hard to acquire genuine real-life data. One group tried to collect real data with visual sensors, but captured only nine real falls by elderly subjects over several years (Demiris et al., 2008). Nine falls are too few to train a meaningful model. As an alternative, Stone and Skubic (2015) hired trained stunt actors to simulate different kinds of falls and built a benchmark data set with 454 falls, including nine real falls by elderly subjects.

5.2. Local Communication Layer (LCL) Using Sensor Fusion

Data transmission for fall detection using sensor networks can be done in different ways. In particular, Bluetooth (Pierleoni et al., 2015; Yang et al., 2016), Wi-Fi, ZigBee (Hsieh et al., 2014), cellular network using smart phones (Chen et al., 2018) and smart watches (Kao et al., 2017), as well as wired connection have all been explored. In studies that used wearable devices, most of them applied wireless methods, such as Bluetooth, which allowed the subject to move unrestricted.

Currently, Bluetooth has probably become the most popular communication protocol for wireless sensors, and it is widely used in existing commercial wearable products such as Shimmer. In the work by Yang et al. (2016), data is transmitted to a laptop in real-time by a Bluetooth module built into a commercial wearable device named Shimmer 2R. The sampling rate can be customized; they chose 32 Hz instead of the default 51.2 Hz, since high sampling frequencies can cause packet loss and also consume more energy. Bluetooth is also used to transmit data in non-commercial wearable devices. For example, Pierleoni et al. (2015) customized a wireless sensor node in which a sensor module, micro-controller, Bluetooth module, battery, mass-storage unit, and wireless receiver were integrated within a prototype device of size 70 × 45 × 30 mm. ZigBee was used to transmit data in the work by Hsieh et al. (2014). In Table 8, we compare different kinds of wireless communication protocols.

TABLE 8
www.frontiersin.org

Table 8. Comparison of different kinds of communication protocol.

As for the data transmission using vision-based and ambient-based approaches, wired options are usually preferred. In the work by Spasova et al. (2016), a standard web camera was connected to an A13-OlinuXino board via USB and an infrared camera was connected to the development board via I2C (Inter-Integrated Circuit). Data and other messages were exchanged within the smart gateways through the internet.

For sensor fusion using different types of sensors, both wireless and cabled methods were utilized because of data variety. In the work by Kwolek and Kepski (2014, 2016), wearable devices and a Kinect were connected to the PandaBoard through Bluetooth and cable, respectively. In Li et al. (2018), the Kinect was connected to a PC via USB and smartphones were connected wirelessly. These two types of sensors, smartphone and Kinect, first monitored the same events separately; the methods processing their signals sent their outputs over the Internet to a Netty server, where a further method fused both outcomes into the final decision of whether the individual had fallen or not.


5.3. Information Processing Layer (IPL) Using Sensor Fusion

5.3.1. Methods of Sensor Fusion

Sensor fusion techniques can be grouped according to several criteria. Yang and Yang (2006) and Tsinganos and Skodras (2018) grouped them into three categories, namely direct data fusion, feature fusion, and decision fusion. We divide sensor fusion techniques into four groups, as shown in Figure 7, which we refer to as fusion with partial sensors, direct data fusion, feature fusion, and decision fusion.

FIGURE 7
www.frontiersin.org

Figure 7. Four kinds of sensor fusion methods including partial fusion, feature fusion, decision fusion, and data fusion. Partial fusion means that a subset of sensors are deployed to make decisions, while the other types of fusion techniques use all sensors as input.

For partial fusion, although multiple sensors are deployed, only one sensor is used for the final decision, as in the work by Ma et al. (2019). They used an RGB and a thermal camera, with the thermal camera only localizing faces; falls were eventually detected only from the data of the regular RGB camera. A similar approach was applied by Spasova et al. (2016), where an infrared camera confirmed the presence of the subject and the data produced by the RGB camera was used to detect falls. Other works with wearable devices deployed the sensors at different stages. For instance, in Kepski and Kwolek (2014) and Kwolek and Kepski (2014), a fall detection system was built from a tri-axial accelerometer and an RGB-D camera. The accelerometer detected the motion of the subject; if the measured signal exceeded a given threshold, the Kinect was activated to capture the ongoing event.

The second approach to sensor fusion is known as feature fusion. In this approach, feature extraction takes place on the signals coming from the different sensors; all features are then merged into long feature vectors and used to train classification models. Most of the studies that we reviewed applied feature fusion for wearable-based fall detection systems. In many commercial wearable devices, sensors such as accelerometers, gyroscopes, and magnetometers are built into one device. Data from these sensors is homogeneous, synchronously sampled at the same frequency, and transmitted with built-in wireless modules. Signals sampled at a synchronized frequency simplify the fusion of data. Statistical features, such as the mean, maximum, standard deviation, correlation, spectral entropy, spectral centroid, sum vector magnitude, the angle between the y-axis and the vertical direction, and differential sum vector magnitude, can be computed from the signals of accelerometers, magnetometers, and gyroscopes, and used as features to train a classification model that can detect different types of falls (Yang et al., 2016; de Quadros et al., 2018; Gia et al., 2018).
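The feature fusion step can be sketched as follows, assuming synchronously sampled channels; the sensor layout and the choice of per-channel statistics (mean, maximum, standard deviation) are illustrative simplifications of the feature lists above.

```python
# Sketch of feature fusion: per-channel statistical features from several
# synchronously sampled wearable sensors are concatenated into one flat
# vector for a classifier. The channel layout and feature choice are
# illustrative.
from statistics import mean, pstdev

def channel_features(samples):
    """Mean, maximum, and (population) standard deviation of one channel."""
    return [mean(samples), max(samples), pstdev(samples)]

def fuse_features(window):
    """`window` maps sensor name -> list of per-axis sample lists.
    Returns one flat feature vector."""
    vector = []
    for sensor in sorted(window):            # fixed ordering across windows
        for axis_samples in window[sensor]:
            vector.extend(channel_features(axis_samples))
    return vector

window = {
    "accelerometer": [[0.1, 0.2, 9.6], [9.8, 9.7, 9.9], [0.0, 0.1, 0.2]],
    "gyroscope":     [[1.0, 2.0, 3.0]],
}
vec = fuse_features(window)
# 3 accelerometer axes + 1 gyroscope axis, 3 features each -> 12 dimensions
```

Because all channels share the same time window, no resampling is needed; the fused vector can be fed directly to any standard classifier.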

Decision fusion is the third approach, where a chain of classifiers is used to reach a decision. A typical arrangement is to have one classification model per sensor type, with the outputs of these models used as input to a further classification model that takes the final decision. Li et al. (2018) explored this approach with accelerometers embedded in smartphones and Kinect sensors. Ozcan and Velipasalar (2016) deployed an accelerometer and an RGB camera for the detection of falls. Different sensors, such as accelerometers, RGB, and RGB-D cameras, were deployed in these studies; decisions are made separately for the individual sensors, and the final decision is reached by combining their outputs.
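A minimal sketch of decision fusion: each sensor-specific classifier emits a (label, confidence) pair, and a confidence-weighted vote makes the final decision. The base classifiers here are hand-written stand-ins for trained per-sensor models; all thresholds and confidence values are illustrative.

```python
# Sketch of decision fusion: per-sensor classifiers each emit a
# (label, confidence) pair; a meta-rule combines them into the final
# decision. The base "classifiers" are illustrative stand-ins for
# trained models.

def accel_classifier(peak_g):
    """Accelerometer-side decision: did the acceleration spike?"""
    return ("fall", 0.9) if peak_g > 2.5 else ("no_fall", 0.8)

def camera_classifier(aspect_ratio):
    """Camera-side decision: a lying posture gives a wide, low
    bounding box (width/height ratio > 1)."""
    return ("fall", 0.7) if aspect_ratio > 1.0 else ("no_fall", 0.6)

def fuse_decisions(*votes):
    """Confidence-weighted vote over (label, confidence) pairs."""
    scores = {}
    for label, conf in votes:
        scores[label] = scores.get(label, 0.0) + conf
    return max(scores, key=scores.get)

final = fuse_decisions(accel_classifier(3.1), camera_classifier(1.4))
print(final)  # "fall": both sensors agree
```

In the reviewed studies the combiner is itself often a trained classifier rather than a fixed voting rule, but the structure is the same.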

The final approach is data fusion, in which the data from the different sensors is fused first and feature extraction is then performed on the fused data. In contrast to feature fusion, which presumes homogeneous data sampled at the same frequency, data fusion can be applied to sensors with different sampling frequencies and data characteristics. For some combinations of sensor types, data from the various sensors can be synchronized and combined directly. Because of the difference in sampling rate between the Kinect camera and wearable sensors, it is challenging to conduct feature fusion directly. To mitigate this difficulty, Gasparrini et al. (2015) adapted the transmission and exposure times of the Kinect camera with ad-hoc acquisition software so as to synchronize the RGB-D data with that of the wearable sensors.
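The synchronization that data fusion requires can be sketched as a nearest-timestamp alignment between two streams of different rates. The 30 Hz/100 Hz rates and the pairing rule are illustrative assumptions (Gasparrini et al. instead adapt the camera timing in acquisition software).

```python
# Sketch of the synchronization step needed for data fusion: a 30 Hz
# depth stream and a 100 Hz accelerometer stream are aligned by pairing
# each depth frame with the accelerometer sample nearest in time.
# Rates and the pairing rule are illustrative.
from bisect import bisect_left

def nearest(ts_list, t):
    """Index of the timestamp in sorted `ts_list` closest to `t`."""
    i = bisect_left(ts_list, t)
    if i == 0:
        return 0
    if i == len(ts_list):
        return len(ts_list) - 1
    return i if ts_list[i] - t < t - ts_list[i - 1] else i - 1

def align(depth_ts, accel_ts):
    """For each depth-frame timestamp, return the index of the matching
    accelerometer sample."""
    return [nearest(accel_ts, t) for t in depth_ts]

accel_ts = [i * 0.01 for i in range(100)]   # 100 Hz, 1 s of data
depth_ts = [i / 30.0 for i in range(30)]    # 30 Hz, 1 s of data
pairs = align(depth_ts, accel_ts)           # one accel index per frame
```

Once aligned, each depth frame and its paired accelerometer sample can be concatenated into one raw record before feature extraction.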

Ozcan and Velipasalar (2016) used both partial and feature fusion, dividing the procedure into two stages. In the first stage, only the accelerometer was used to flag a potential fall, after which the Kinect camera was activated. In the second stage, features from both the Kinect camera and the accelerometer were extracted to classify the activity as fall or non-fall.
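The two-stage scheme can be sketched as follows; the thresholds and the camera-side feature (how far the tracked person's bounding-box height has dropped) are illustrative stand-ins rather than the authors' actual features.

```python
# Sketch of a two-stage (partial + feature fusion) scheme: a cheap,
# always-on accelerometer check gates an expensive camera stage.
# Thresholds and the camera feature are illustrative stand-ins.

def stage1_accel(peak_g, threshold_g=2.0):
    """Always-on check: did the acceleration spike?"""
    return peak_g > threshold_g

def stage2_fused(peak_g, height_ratio):
    """Runs only after stage 1 fires: combine the accelerometer reading
    with a camera feature (current vs. usual bounding-box height)."""
    return peak_g > 2.5 or height_ratio < 0.4

def detect(peak_g, height_ratio):
    if not stage1_accel(peak_g):
        return False               # camera stage never activated
    return stage2_fused(peak_g, height_ratio)

print(detect(1.1, 0.3))   # False: no spike, camera not consulted
print(detect(2.2, 0.3))   # True: spike and the person collapsed in view
print(detect(2.2, 0.9))   # False: spike but posture still upright
```

Gating the camera on the accelerometer keeps average computational cost low, which is what makes such schemes attractive for battery-powered deployments.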

5.3.2. Machine Learning, Deep Learning, and Deep Reinforcement Learning

In terms of fall detection techniques based on wearable sensor fusion, the explored methods include threshold-based approaches, traditional machine learning, and deep learning. The latter two are the most popular due to their robustness. The research by Chelli and Pätzold (2019) applied both traditional machine learning [kNN, QSVM, Ensemble Bagged Tree (EBT)] and deep learning. Their experiments were divided into two parts, namely activity recognition and fall detection. For the former, traditional machine learning and deep learning outperformed the other approaches, achieving 94.1 and 93.2% accuracy, respectively. Queralta et al. (2019) applied a long short-term memory (LSTM) approach, where wearable nodes including an accelerometer, gyroscope, and magnetometer were embedded in a low-power wide-area network with combined edge and fog computing. The LSTM is a type of recurrent neural network aimed at long sequence learning tasks. Their system achieved an average recall of 95% while providing a real-time fall detection solution running on cloud platforms. Another example is the work by Nukala et al. (2014), who fused the measurements of accelerometers and gyroscopes and applied an Artificial Neural Network (ANN) for fall detection.

As for visual sensor based fusion techniques, the limited number of studies included in our survey applied either traditional machine learning or deep learning approaches (Espinosa et al., 2019; Ma et al., 2019). Fusion of multiple visual sensors on a public data set was presented by Espinosa et al. (2019), where a 2D CNN was trained to classify falls among daily life activities.

Another approach is reinforcement learning (RL), a growing branch of machine learning that is gaining popularity in the fall detection field as well. Deep reinforcement learning (DRL) combines the advantages of deep learning and reinforcement learning, and has already shown its benefits in fall prevention (Namba and Yamada, 2018a,b; Yang, 2018) and fall detection (Yang, 2018). Namba and Yamada (2018a) proposed a fall-risk prevention approach using assistive robots for elderly people living independently, collecting images and videos with the location information of accidents. Most conventional machine learning and deep learning methods are, however, challenged when the operational environment changes, because their data-driven nature makes them robust mostly in the environments where they were trained.

5.3.3. Data Storage and Analysis

Typical data storage options include SD cards, local storage on the integration device, or remote storage in the cloud. For example, some studies used the camera and accelerometer of smartphones and stored the data on the smartphones' local storage (Ozcan and Velipasalar, 2016; Shi et al., 2016; Medrano et al., 2017). Other studies applied off-line methods and stored the data on their own computers for processing at a later stage. Alamri et al. (2013) argue that the sensor-cloud will become the future trend because cloud platforms can be more open and flexible than local platforms, which have limited storage and processing power.

5.4. User Application Layer (UAL) of Sensor Fusion

Due to the rapid development of miniature bio-sensing devices, there has been a boom in wearable sensors and other fall detection modules. Wearable modules such as Shimmer, which integrate sensing units, communication protocols, and sufficient computational ability, are available as affordable commercial products. For example, some wearable-based applications have been applied to fall detection and health monitoring in general. The goal of wearable devices is to "wear and forget": examples include electronic skins (e-skins) that adhere to the body surface, and clothing- or accessory-based devices for which proximity is sufficient. To fulfill this goal, many efforts have been put into the study of wearable systems, such as the My Heart project (Habetha, 2006), the Wearable Health Care System (WEALTHY) project (Paradiso et al., 2005), the Medical Remote Monitoring of clothes (MERMOTH) project (Luprano, 2006), and the project by Pandian et al. (2008). Some wearable sensors have also been developed specifically for fall detection: Shibuya et al. (2015) used a wearable wireless gait sensor to detect falls. More and more research uses existing commercial wearable products, which include data transmission functionality and send alarms when falls are detected.

5.4.1. Summary

• Due to differences in sampling frequency and data characteristics, there are two main categories of sensor fusion. As shown in Tables 6, 7, sensor fusion studies divide into fusion of sensors from the same category (e.g., fusion of wearable sensors, fusion of visual sensors, and fusion of ambient sensors) and fusion of sensors from different categories.

• Subjects in fall detection studies using sensor networks are still mostly young, healthy volunteers, as with individual sensors. Only one study used mixed data containing both simulated and genuine falls.

• More wearable-based approaches are integrated with IoT platforms than vision-based ones, because data transmission and storage modules are built into existing commercial products.

• For the research combining sensors from different categories, the combination of accelerometer and Kinect camera is the most popular method.

• Partial fusion, data fusion, feature fusion, and decision fusion are four main methods of sensor fusion. Among them, feature fusion is the most popular approach, followed by decision fusion. For fusion using non-vision wearable sensors, most of the studies that we reviewed applied feature fusion, while decision fusion is the most appealing one for fusing sensors from different categories.

6. Security and Privacy

Because the data generated by autonomous monitoring systems are security-critical and privacy-sensitive, there is an urgent need to protect users' privacy and prevent these systems from being attacked. Cyberattacks on autonomous monitoring systems may cause physical or mental harm and can even threaten the lives of the subjects under monitoring.

6.1. Security

In this survey we approached fall detection systems through different layers, including the Physiological Sensing Layer (PSL), Local Communication Layer (LCL), Information Processing Layer (IPL), Internet Application Layer (IAL), and User Application Layer (UAL). Every layer faces security issues. For instance, information may leak in the LCL during data transmission, and there are potential vulnerabilities in cloud storage and processing facilities. Based on the literature that we report in Tables 3–7, most studies in the field of fall detection do not address security matters. Only a few studies (Edgcomb and Vahid, 2012; Mastorakis and Makris, 2014; Ma et al., 2019) take privacy into consideration. Because of the distinct characteristics of wired and wireless transmission, it is still an open problem to find a comprehensive security protocol that covers the security issues of both wired and wireless data transmission and storage (Islam et al., 2015).

6.2. Privacy

As mentioned above, privacy is one of the most important issues for users of autonomous health monitoring systems. Methods to protect privacy depend on the type of sensor used, since not all sensors suffer from privacy issues equally. For example, vision-based sensors, such as RGB cameras, are more vulnerable in terms of privacy than wearable sensors, such as accelerometers. In a detection system that uses only wearable sensors, privacy problems are not as critical as in systems that involve visual sensors.

To address the privacy concerns associated with RGB cameras, some researchers proposed to mitigate them by blurring and distorting appearances as post-processing steps in the application layer (Edgcomb and Vahid, 2012). An alternative is to address privacy at the design stage, as suggested by Ma et al. (2019), who investigated an optical-level anonymous image sensing system. A thermal camera was deployed to locate faces and an RGB camera was used to detect falls. The location of the subject's face was used to generate a mask pattern on a spatial light modulator that controls the light entering the RGB camera, so that faces were blurred by blocking the corresponding visible light rays before they ever reached the sensor.
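A minimal sketch of such application-layer post-processing is shown below: it pixelates a rectangular region of a grayscale frame before the frame is stored or transmitted. The frame contents, region coordinates, and block size are hypothetical; in a real system the region would come from a face detector, and this is a generic stand-in rather than the actual method of Edgcomb and Vahid (2012).

```python
# Sketch of application-layer privacy masking: pixelate a region of a
# grayscale frame (e.g., a detected face) before storage or transmission.
# Region coordinates would come from a face detector; hard-coded here.

def pixelate_region(frame, top, left, height, width, block=4):
    """Replace each block-sized tile inside the region by its mean value."""
    out = [row[:] for row in frame]  # copy; frame is a list of pixel rows
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            tile = [out[y][x]
                    for y in range(by, min(by + block, top + height))
                    for x in range(bx, min(bx + block, left + width))]
            mean = sum(tile) // len(tile)
            for y in range(by, min(by + block, top + height)):
                for x in range(bx, min(bx + block, left + width)):
                    out[y][x] = mean
    return out

# An 8x8 frame with a varying 4x4 "face" in the top-left corner against a
# uniform background of intensity 10.
frame = [[(y * 8 + x) if (y < 4 and x < 4) else 10 for x in range(8)]
         for y in range(8)]
masked = pixelate_region(frame, top=0, left=0, height=4, width=4, block=4)
# Every pixel in the masked region now holds the tile mean; the rest of
# the frame is untouched, so fall-relevant context is preserved.
```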

The infrared camera is another sensor that can protect the privacy of subjects. Mastorakis and Makris (2014) investigated the infrared camera built into a Kinect sensor; it only captures the thermal distribution of subjects, with no information on the subject's appearance or living environment. Depth cameras are other vision-based sensors that can protect privacy: the fact that they only capture depth information has made them more popular than RGB cameras.

When fall detection relies on sensor networks, different kinds of data are collected as more sensors become involved. With more data collection and transfer, the whole sensor-fusion fall detection system becomes more complicated, which makes protecting privacy and security even harder. There is a trade-off between privacy and the benefits of autonomous monitoring systems. The aim is to keep improving the algorithms while keeping privacy and security issues to a minimum; this is the only way to make such systems socially acceptable.

7. Projects and Applications Around Fall Detection

Approaches to fall detection have evolved from personal emergency response systems (PERS) to intelligent automatic ones. One of the early fall detection systems sends an alarm via a PERS push-button, but it may fail when the person concerned loses consciousness or is too weak to move (Leff, 1997). Numerous attempts have been made to monitor not only falls but also other specific activities in autonomous health monitoring, and many projects have been conducted to develop applications for autonomous health monitoring, including fall detection, prediction, and prevention.

Some of the aforementioned studies were promoted as commercial products. Wearable, visual, and ambient sensors are all deployed in commercial fall detection applications, with wearable sensors being the most common. For example, a company named Shimmer has developed 7 kinds of wearable sensing products aimed at autonomous health monitoring. One of them, the Shimmer3 IMU Development Kit, is a wearable sensor node comprising a sensing module, a data transmission module, and a receiver, and it has been used by Mahmud and Sirat (2015) and Djelouat et al. (2017). The iLife fall detection sensor, developed by AlertOne4, provides a fall detection service and a one-button alert system. Smartwatches are another commercial solution: accelerometers embedded in smartwatches have been studied for fall detection (Kao et al., 2017; Wu et al., 2019), and the Apple Watch Series 4 and later are equipped with a fall detection function that can connect the wearer to emergency services. Although there are few specific commercial fall detection products based on RGB cameras, the relevant studies show a promising future; there are open-source solutions provided by Microsoft using Kinect that detect falls in real time and have the potential to be deployed as commercial products.
As for ambient sensors, Linksys Aware applies tri-band mesh WiFi systems to fall detection, offering a premium subscription service as a commercial motion detection product. CodeBlue, a Harvard University research project, focused on developing wireless sensor networks for medical applications (Lorincz et al., 2004). The MIThril project (DeVaul et al., 2003) is a next-generation wearable research platform developed by researchers at the MIT Media Lab, who made their software open source and their hardware specifications publicly available.

The Ivy project (Pister et al., 2003) is a sensor network infrastructure from the Berkeley College of Engineering, University of California. The project aims to develop a sensor network system that assists elderly people living independently. Using a network of fixed sensors and mobile sensors worn on the body, anomalous behavior of the monitored person can be detected; once a fall is detected, the system alerts caregivers so that they can respond urgently.

A sensor network was built in 13 apartments in TigerPlace, an aging-in-place retirement community in Columbia, Missouri, and continuous data were collected over 3,339 days (Demiris et al., 2008). The network, comprising simple motion sensors, video sensors, and bed sensors that capture sleep restlessness as well as pulse and respiration levels, was installed in the apartments of 14 volunteers. The activities of 16 elderly people in TigerPlace, whose ages ranged from 67 to 97, were recorded continuously, and 9 genuine falls were captured. Based on this data set, Li et al. (2013) developed a sensor fusion algorithm which achieved a low rate of false alarms and a high detection rate.

8. Trends and Open Challenges

8.1. Trends

8.1.1. Sensor Fusion

There seems to be a general consensus that sensor fusion provides a more robust approach to the detection of elderly falls. Different sensors may complement each other in different situations: instead of relying on a single sensor, which may be unreliable when conditions are not suitable for it, the idea is to rely on several types of sensor that together capture reliable data under various conditions. The result is a more robust system that keeps false alarms to a minimum while achieving high precision.

8.1.2. Machine Learning, Deep Learning and Deep Reinforcement Learning

Conventional machine learning approaches have been widely applied in fall detection and activity recognition, and their results outperform those of threshold-based methods in studies that use wearable sensors. Deep learning is a subset of machine learning concerned with artificial neural networks inspired by the mammalian brain. Deep learning approaches are gaining popularity, especially for visual sensors and sensor fusion, and are becoming the state of the art for fall detection and other activity recognition. Deep reinforcement learning is another promising research direction for fall detection. Reinforcement learning is inspired by psychological and neuroscientific accounts of how humans adapt and optimize decisions in a changing environment. Deep reinforcement learning combines the advantages of deep learning and reinforcement learning, and could provide detection methods that adapt to changing conditions without sacrificing accuracy or robustness.
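For context, the threshold-based baseline that learned methods are typically compared against can be sketched as follows. The free-fall, impact, and stillness thresholds (in g) and the window lengths are invented values for illustration, not taken from any surveyed study.

```python
import math

# Illustrative threshold-based fall detector on tri-axial accelerometer
# samples: look for a free-fall dip, then an impact spike, then a period
# of near-1 g stillness (lying still). All thresholds are made up.

def magnitude(sample):
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, free_fall=0.4, impact=2.5, still=1.2, still_len=4):
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m >= free_fall:
            continue
        # Free-fall dip found; look for an impact spike shortly after...
        spike = next((j for j in range(i + 1, min(i + 5, len(mags)))
                      if mags[j] >= impact), None)
        if spike is None:
            continue
        # ...followed by a stretch of stillness after the impact.
        tail = mags[spike + 1: spike + 1 + still_len]
        if len(tail) == still_len and all(m <= still for m in tail):
            return True
    return False

# Simulated fall: 1 g rest, free-fall dip, 3.2 g impact, then stillness.
fall = [(0, 0, 1.0)] * 3 + [(0, 0, 0.2), (0, 0, 3.2)] + [(0, 0, 1.0)] * 4
walk = [(0, 0, 1.1), (0.2, 0, 1.3)] * 6
print(detect_fall(fall), detect_fall(walk))
```

Learned methods replace the hand-tuned thresholds and rules with parameters estimated from labeled data, which is why they generalize better across subjects and activities.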

8.1.3. Fall Detection Systems on 5G Wireless Networks

5G is a softwarized and virtualized wireless network, which includes both a physical network and software virtual network functions. Compared with 4G networks, the fifth generation of mobile networks introduces high-speed, low-latency data transmission, which could contribute to the development of fall detection by IoT systems. Firstly, 5G is envisioned to become an important and universal communication protocol for IoT. Secondly, 5G cellular signals can be used for passive sensing. Unlike other RF-sensing approaches (e.g., WiFi or radar), which are aimed at short-distance indoor fall detection, the 5G wireless network can be applied to both indoor and outdoor scenarios as a pervasive sensing method. This type of network has already been successfully investigated by Gholampooryazdi et al. (2017) for crowd-size estimation, presence detection, and walking-speed detection, with experiments showing accuracies of 80.9, 92.8, and 95%, respectively. Thirdly, we expect 5G to become a highly efficient and accurate platform for anomaly detection. Smart networks and systems powered by 5G IoT and deep learning can be applied not only in fall detection but also in other pervasive sensing and smart monitoring systems that help elderly people live independently with a high quality of life.

8.1.4. Personalized or Simulated Data

El-Bendary et al. (2013) and Namba and Yamada (2018b) have proposed to include historical medical and behavioral data of individuals along with sensor data. This enriches the data and consequently leads to better-informed decisions. This innovative perspective allows a more personalized approach, as it uses the health profile of the individual concerned, and it has the potential to become a trend in this field as well. Another trend could be the way data sets are created to evaluate fall detection systems. Mastorakis et al. (2007, 2018) applied skeletal models simulated in OpenSim, an open-source software package developed by Stanford University that can simulate different kinds of pre-defined skeletal models. They acquired 132 videos of different types of falls and trained their algorithms on those models. The high performance that they report indicates that the falls simulated by OpenSim are realistic and, therefore, effective for training a fall detection model. Physics engines like OpenSim can simulate customized data based on the height and age of different subjects, which opens new directions for fall detection. Another solution that can potentially address the scarcity of data is to develop algorithms that adapt to subjects who were not part of the original training set (Deng et al., 2014; Namba and Yamada, 2018a,b), as we described in section 4.1.4.

8.1.5. Fog Computing

As far as architecture is concerned, fog computing offers the possibility to distribute different levels of processing across the involved edge devices in a decentralized way. Smart devices that can carry out some processing and communicate directly with each other are more attractive for (near) real-time processing than systems based on cloud computing (Queralta et al., 2019). An example of such a smart device is the Intel® RealSense™ depth camera, which includes a 28 nanometer (nm) processor to compute real-time depth images.

8.2. Open Challenges

The topic of fall detection has been studied extensively over the past two decades and many approaches have been proposed. The rapid development of new technologies keeps this topic very active in the research community. Although much progress has been made, there are still various open challenges, which we discuss below.

1. The rarity of real fall data: There is no convincing public data set that could serve as a gold standard. Many data sets of simulated falls captured with individual sensors are available, but it is debatable whether models trained on data collected from young and healthy subjects can be applied to elderly people in real-life scenarios. To the best of our knowledge, only Liu et al. (2014) used a data set with nine real falls along with 445 simulated ones. Data sets with multiple sensors are even scarcer. There is, therefore, an urgent need to create a benchmark data set with data coming from multiple sensors.

2. Detection in real time: The attempts that we have seen in the literature are all based on offline fall detection methods. While this is an important step, it is time for research to focus more on real-time systems that can be applied in the real world.

3. Security and privacy: We have seen little attention to the security and privacy concerns of fall detection approaches. Security and privacy is therefore another topic which, in our opinion, must be addressed in cohesion with fall detection methods.

4. Platforms for sensor fusion: This is still a nascent topic with a lot of potential. Studies so far have treated it only minimally, as they mostly focused on the analytics aspect of the problem. To bring solutions closer to the market, more holistic studies are needed to develop full information systems that can manage and transmit data in an efficient, effective, and secure way.

5. Limitation of location: Some sensors, such as visual ones, have limited capability because they are fixed and static. It is necessary to develop fall detection systems that can be applied in both controlled (indoor) and uncontrolled (outdoor) environments.

6. Scalability and flexibility: With the increasing number of affordable sensors, there is a crucial need to study the scalability of fall detection systems, especially when heterogeneous sensors are considered (Islam et al., 2015). There is an increasing demand for scalable fall detection approaches that do not sacrifice robustness or security. Considering cloud-based trends, fall detection modules, such as data transmission, processing, applications, and services, should be configurable and scalable in order to adapt to growing commercial demands. Cloud-based systems enable greater scalability of health monitoring systems at different levels, as the need for hardware and software resources changes with time, and they can add or remove sensors and services with little effort on the architecture (Alamri et al., 2013).

9. Summary and Conclusions

In this review we give an account of fall detection systems from a holistic point of view that includes data collection, data management, data transmission, security and privacy, as well as applications.

In particular, we compare approaches that rely on individual sensors with those based on sensor networks with various fusion techniques. The survey describes the components of fall detection and aims to give a comprehensive understanding of the physical elements, software organization, working principles, techniques, and arrangement of the different components of fall detection systems.

We draw the following conclusions.

1. The sensors and algorithms proposed during the past 6 years differ considerably from the research before 2014. Accelerometers are still the most popular sensors in wearable devices, while Kinect has taken the place of the RGB camera as the most popular visual sensor. The combination of Kinect and accelerometer is turning out to be the most sought after.

2. There is not yet a benchmark data set on which fall detection systems can be evaluated and compared. This creates a hurdle in advancing the field. Although there has been an attempt to use middle-aged subjects to simulate falls (Kangas et al., 2008), there are still behavioral differences between elderly and middle-aged subjects.

3. Sensor fusion seems to be the way forward. It provides more robust fall detection solutions, but comes with higher computational costs compared to relying on individual sensors. The challenge is therefore to mitigate the computational costs.

4. Existing studies focus mainly on the data analytics aspect and pay little attention to the IoT platforms needed to build complete and stable systems. Moreover, the effort is put into analyzing data offline. To bring such systems to market, more effort needs to be invested in building all the components that make a robust, stable, and secure system that allows (near) real-time processing and gains the trust of elderly people.

The detection of elderly falls is an example of the potential of autonomous health monitoring systems. While the focus here was on elderly people, the same or similar systems can be applied to people with mobility problems. With the ongoing development of IoT devices, autonomous health monitoring and assistance systems that rely on such devices seem to be the key to detecting early signs of physical and cognitive problems, ranging from cardiovascular issues to mental disorders such as Alzheimer's disease and dementia.

Author Contributions

GA and XW conceived and planned the paper. XW wrote the manuscript in consultation with GA and JE. All authors listed in this paper have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Funding

XW holds a fellowship (grant number: 201706340160) from the China Scholarship Council supplemented by the University of Groningen. The support provided by the China Scholarship Council (CSC) during the study at the University of Groningen is acknowledged.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. ^https://chinapower.csis.org/aging-problem/

2. ^https://www.google.com/trends

3. ^1. Robert-Bosch Hospital (RBMF), Germany; 2. University of Tübingen, Germany; 3. University of Nürnberg/Erlangen, Germany; 4. German Sport University Cologne, Germany; 5. Bethanien-Hospital/Geriatric Center at the University of Heidelberg, Germany; 6. University of Auckland, New Zealand.

4. ^https://www.alert-1.com/

References

1. (2011). SDUFall. Available online at: http://www.sucro.org/homepage/wanghaibo/SDUFall.html

2. (2014). URFD. Available online at: https://sites.google.com/view/haibowang/home

3. Abbate S., Avvenuti M., Bonatesta F., Cola G., Corsini P., and Vecchio A. (2012). A smartphone-based fall detection system. Pervas. Mobile Comput. 8, 883–899. doi: 10.1016/j.pmcj.2012.08.003

4. Adhikari K., Bouchachia H., and Nait-Charif H. (2017). “Activity recognition for indoor fall detection using convolutional neural network,” in 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA) (Nagoya: IEEE), 81–84. doi: 10.23919/MVA.2017.7986795

5. Akagündüz E., Aslan M., Şengür A., Wang H., and İnce M. C. (2017). Silhouette orientation volumes for efficient fall detection in depth videos. IEEE J. Biomed. Health Inform. 21, 756–763. doi: 10.1109/JBHI.2016.2570300

6. Alamri A., Ansari W. S., Hassan M. M., Hossain M. S., Alelaiwi A., and Hossain M. A. (2013). A survey on sensor-cloud: architecture, applications, and approaches. Int. J. Distribut. Sensor Netw. 9, 917923. doi: 10.1155/2013/917923

7. Amini A., Banitsas K., and Cosmas J. (2016). “A comparison between heuristic and machine learning techniques in fall detection using kinect v2,” in 2016 IEEE International Symposium on Medical Measurements and Applications (MeMeA) (Benevento: IEEE), 1–6. doi: 10.1109/MeMeA.2016.7533763

8. Aslan M., Sengur A., Xiao Y., Wang H., Ince M. C., and Ma X. (2015). Shape feature encoding via fisher vector for efficient fall detection in depth-videos. Applied Soft. Comput. 37, 1023–1028. doi: 10.1016/j.asoc.2014.12.035

9. Auvinet E., Multon F., Saint-Arnaud A., Rousseau J., and Meunier J. (2011). Fall detection with multiple cameras: an occlusion-resistant method based on 3-D silhouette vertical distribution. IEEE Trans. Inform. Technol. Biomed. 15, 290–300. doi: 10.1109/TITB.2010.2087385

10. Aziz O., Musngi M., Park E. J., Mori G., and Robinovitch S. N. (2017). A comparison of accuracy of fall detection algorithms (threshold-based vs. machine learning) using waist-mounted tri-axial accelerometer signals from a comprehensive set of falls and non-fall trials. Med. Biol. Eng. Comput. 55, 45–55. doi: 10.1007/s11517-016-1504-y

11. Bian Z.-P., Hou J., Chau L.-P., and Magnenat-Thalmann N. (2015). Fall detection based on body part tracking using a depth camera. IEEE J. Biomed. Health Inform. 19, 430–439. doi: 10.1109/JBHI.2014.2319372

12. Bloom D. E., Boersch-Supan A., McGee P., and Seike A. (2011). Population aging: facts, challenges, and responses. Benefits Compens. Int. 41, 22.

13. Boulard L., Baccaglini E., and Scopigno R. (2014). “Insights into the role of feedbacks in the tracking loop of a modular fall-detection algorithm,” in 2014 IEEE Visual Communications and Image Processing Conference (Valletta: IEEE), 406–409. doi: 10.1109/VCIP.2014.7051592

14. Bourke A., O'brien J., and Lyons G. (2007). Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm. Gait Post. 26, 194–199. doi: 10.1016/j.gaitpost.2006.09.012

15. Bourke A. K., and Lyons G. M. (2008). A threshold-based fall-detection algorithm using a bi-axial gyroscope sensor. Med. Eng. Phys. 30, 84–90. doi: 10.1016/j.medengphy.2006.12.001

16. Cai Z., Han J., Liu L., and Shao L. (2017). RGB-D datasets using Microsoft Kinect or similar sensors: a survey. Multimedia Tools Appl. 76, 4313–4355. doi: 10.1007/s11042-016-3374-6

17. Charfi I., Miteran J., Dubois J., Atri M., and Tourki R. (2012). Definition and performance evaluation of a robust SVM based fall detection solution. SITIS 12, 218–224. doi: 10.1109/SITIS.2012.155

18. Chaudhuri S., Thompson H., and Demiris G. (2014). Fall detection devices and their use with older adults: a systematic review. J. Geriatr. Phys. Ther. 37, 178. doi: 10.1519/JPT.0b013e3182abe779

19. Chelli A., and Pätzold M. (2019). A machine learning approach for fall detection and daily living activity recognition. IEEE Access 7, 38670–38687. doi: 10.1109/ACCESS.2019.2906693

20. Chen C., Jafari R., and Kehtarnavaz N. (2015). “UTD-MHAD: a multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor,” in 2015 IEEE International Conference on Image Processing (ICIP) (Quebec City: IEEE), 168–172. doi: 10.1109/ICIP.2015.7350781

21. Chen C., Jafari R., and Kehtarnavaz N. (2017a). A survey of depth and inertial sensor fusion for human action recognition. Multimedia Tools Appl. 76, 4405–4425. doi: 10.1007/s11042-015-3177-1

22. Chen K.-H., Hsu Y.-W., Yang J.-J., and Jaw F.-S. (2017b). Enhanced characterization of an accelerometer-based fall detection algorithm using a repository. Instrument. Sci. Technol. 45, 382–391. doi: 10.1080/10739149.2016.1268155

23. Chen K.-H., Hsu Y.-W., Yang J.-J., and Jaw F.-S. (2018). Evaluating the specifications of built-in accelerometers in smartphones on fall detection performance. Instrument. Sci. Technol. 46, 194–206. doi: 10.1080/10739149.2017.1363054

24. Chua J.-L., Chang Y. C., and Lim W. K. (2015). A simple vision-based fall detection technique for indoor video surveillance. Signal Image Video Process. 9, 623–633. doi: 10.1007/s11760-013-0493-7

25. Daher M., Diab A., El Najjar M. E. B., Khalil M. A., and Charpillet F. (2017). Elder tracking and fall detection system using smart tiles. IEEE Sens. J. 17, 469–479. doi: 10.1109/JSEN.2016.2625099

26. Dai J., Bai X., Yang Z., Shen Z., and Xuan D. (2010). “PerfallD: a pervasive fall detection system using mobile phones,” in 2010 8th IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops) (Mannheim: IEEE), 292–297.

27. de Araújo Í. L., Dourado L., Fernandes L., Andrade R. M. C., and Aguilar P. A. C. (2018). “An algorithm for fall detection using data from smartwatch,” in 2018 13th Annual Conference on System of Systems Engineering (SoSE) (Paris: IEEE), 124–131. doi: 10.1109/SYSOSE.2018.8428786

28. de Quadros T., Lazzaretti A. E., and Schneider F. K. (2018). A movement decomposition and machine learning-based fall detection system using wrist wearable device. IEEE Sens. J. 18, 5082–5089. doi: 10.1109/JSEN.2018.2829815

29. Demiris G., Hensel B. K., Skubic M., and Rantz M. (2008). Senior residents' perceived need of and preferences for “smart home” sensor technologies. Int. J. Technol. Assess. Health Care 24, 120–124. doi: 10.1017/S0266462307080154

30. Deng W.-Y., Zheng Q.-H., and Wang Z.-M. (2014). Cross-person activity recognition using reduced kernel extreme learning machine. Neural Netw. 53, 1–7. doi: 10.1016/j.neunet.2014.01.008

31. DeVaul R., Sung M., Gips J., and Pentland A. (2003). “Mithril 2003: applications and architecture,” in Null (White Plains, NY: IEEE), 4. doi: 10.1109/ISWC.2003.1241386

32. Diraco G., Leone A., and Siciliano P. (2010). “An active vision system for fall detection and posture recognition in elderly healthcare,” in 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010) (Dresden: IEEE), 1536–1541. doi: 10.1109/DATE.2010.5457055

33. Djelouat H., Baali H., Amira A., and Bensaali F. (2017). “CS-based fall detection for connected health applications,” in 2017 Fourth International Conference on Advances in Biomedical Engineering (ICABME) (Beirut: IEEE), 1–4. doi: 10.1109/ICABME.2017.8167540

34. Edgcomb A., and Vahid F. (2012). Privacy perception and fall detection accuracy for in-home video assistive monitoring with privacy enhancements. ACM SIGHIT Rec. 2, 6–15. doi: 10.1145/2384556.2384557

35. El-Bendary N., Tan Q., Pivot F. C., and Lam A. (2013). Fall detection and prevention for the elderly: a review of trends and challenges. Int. J. Smart Sens. Intell. Syst. 6. doi: 10.21307/ijssis-2017-588

36. Espinosa R., Ponce H., Gutiérrez S., Martínez-Villaseñor L., Brieva J., and Moya-Albor E. (2019). A vision-based approach for fall detection using multiple cameras and convolutional neural networks: a case study using the up-fall detection dataset. Comput. Biol. Med. 115:103520. doi: 10.1016/j.compbiomed.2019.103520

37. Feng W., Liu R., and Zhu M. (2014). Fall detection for elderly person care in a vision-based home surveillance environment using a monocular camera. Signal Image Video Process. 8, 1129–1138. doi: 10.1007/s11760-014-0645-4

38. Gasparrini S., Cippitelli E., Gambi E., Spinsante S., Wåhslén J., Orhan I., et al. (2015). “Proposal and experimental evaluation of fall detection solution based on wearable and depth data fusion,” in International Conference on ICT Innovations (Ohrid: Springer), 99–108. doi: 10.1007/978-3-319-25733-4_11

39. Gasparrini S., Cippitelli E., Spinsante S., and Gambi E. (2014). A depth-based fall detection system using a kinect® sensor. Sensors 14, 2756–2775. doi: 10.3390/s140202756

40. Gharghan S., Mohammed S., Al-Naji A., Abu-AlShaeer M., Jawad H., Jawad A., et al. (2018). Accurate fall detection and localization for elderly people based on neural network and energy-efficient wireless sensor network. Energies 11, 2866. doi: 10.3390/en11112866

41. Gholampooryazdi B., Singh I., and Sigg S. (2017). “5G ubiquitous sensing: passive environmental perception in cellular systems,” in 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall) (Toronto: IEEE), 1–6. doi: 10.1109/VTCFall.2017.8288261

42. Gia T. N., Sarker V. K., Tcarenko I., Rahmani A. M., Westerlund T., Liljeberg P., et al. (2018). Energy efficient wearable sensor node for iot-based fall detection systems. Microprocess. Microsyst. 56, 34–46. doi: 10.1016/j.micpro.2017.10.014

43. Guo B., Zhang Y., Zhang D., and Wang Z. (2019). Special issue on device-free sensing for human behavior recognition. Pers. Ubiquit. Comput. 23, 1–2. doi: 10.1007/s00779-019-01201-8

44. Habetha J. (2006). “The myheart project-fighting cardiovascular diseases by prevention and early diagnosis,” in Engineering in Medicine and Biology Society, 2006. EMBS'06. 28th Annual International Conference of the IEEE (New York, NY: IEEE), 6746–6749. doi: 10.1109/IEMBS.2006.260937

45. Han Q., Zhao H., Min W., Cui H., Zhou X., Zuo K., et al. (2020). A two-stream approach to fall detection with mobileVGG. IEEE Access 8, 17556–17566. doi: 10.1109/ACCESS.2019.2962778

46. Hao Z., Duan Y., Dang X., and Xu H. (2019). “KS-fall: Indoor human fall detection method under 5GHZ wireless signals,” in IOP Conference Series: Materials Science and Engineering, Vol. 569 (Sanya: IOP Publishing), 032068. doi: 10.1088/1757-899X/569/3/032068

47. Hori T., Nishida Y., Aizawa H., Murakami S., and Mizoguchi H. (2004). “Sensor network for supporting elderly care home,” in Sensors, 2004, Proceedings of IEEE (Vienna: IEEE), 575–578. doi: 10.1109/ICSENS.2004.1426230

48. Hsieh S.-L., Chen C.-C., Wu S.-H., and Yue T.-W. (2014). “A wrist-worn fall detection system using accelerometers and gyroscopes,” in Proceedings of the 11th IEEE International Conference on Networking, Sensing and Control (Miami: IEEE), 518–523. doi: 10.1109/ICNSC.2014.6819680

49. Huang Y., Chen W., Chen H., Wang L., and Wu K. (2019). “G-fall: device-free and training-free fall detection with geophones,” in 2019 16th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON) (Boston, MA: IEEE), 1–9. doi: 10.1109/SAHCN.2019.8824827

50. Igual R., Medrano C., and Plaza I. (2013). Challenges, issues and trends in fall detection systems. Biomed. Eng. Online 12, 66. doi: 10.1186/1475-925X-12-66

51. Islam S. R., Kwak D., Kabir M. H., Hossain M., and Kwak K.-S. (2015). The internet of things for health care: a comprehensive survey. IEEE Access 3, 678–708. doi: 10.1109/ACCESS.2015.2437951

52. Islam Z. Z., Tazwar S. M., Islam M. Z., Serikawa S., and Ahad M. A. R. (2017). “Automatic fall detection system of unsupervised elderly people using smartphone,” in 5th IIAE International Conference on Intelligent Systems and Image Processing 2017 (Hawaii), 5. doi: 10.12792/icisip2017.077

53. Kangas M., Konttila A., Lindgren P., Winblad I., and Jämsä T. (2008). Comparison of low-complexity fall detection algorithms for body attached accelerometers. Gait Post. 28, 285–291. doi: 10.1016/j.gaitpost.2008.01.003

54. Kao H.-C., Hung J.-C., and Huang C.-P. (2017). “GA-SVM applied to the fall detection system,” in 2017 International Conference on Applied System Innovation (ICASI) (Sapporo: IEEE), 436–439. doi: 10.1109/ICASI.2017.7988446

55. Kepski M., and Kwolek B. (2014). “Fall detection using ceiling-mounted 3D depth camera,” in 2014 International Conference on Computer Vision Theory and Applications (VISAPP), Vol. 2 (Lisbon: IEEE), 640–647.

56. Kerdjidj O., Ramzan N., Ghanem K., Amira A., and Chouireb F. (2020). Fall detection and human activity classification using wearable sensors and compressed sensing. J. Ambient Intell. Human. Comput. 11, 349–361. doi: 10.1007/s12652-019-01214-4

57. Khojasteh S., Villar J., Chira C., González V., and de la Cal E. (2018). Improving fall detection using an on-wrist wearable accelerometer. Sensors 18:1350. doi: 10.3390/s18051350

58. Klenk J., Schwickert L., Palmerini L., Mellone S., Bourke A., Ihlen E. A., et al. (2016). The farseeing real-world fall repository: a large-scale collaborative database to collect and share sensor signals from real-world falls. Eur. Rev. Aging Phys. Activity 13:8. doi: 10.1186/s11556-016-0168-9

59. Ko M., Kim S., Kim M., and Kim K. (2018). A novel approach for outdoor fall detection using multidimensional features from a single camera. Appl. Sci. 8:984. doi: 10.3390/app8060984

60. Kong Y., Huang J., Huang S., Wei Z., and Wang S. (2019). Learning spatiotemporal representations for human fall detection in surveillance video. J. Visual Commun. Image Represent. 59, 215–230. doi: 10.1016/j.jvcir.2019.01.024

61. Kumar D. P., Yun Y., and Gu I. Y.-H. (2016). “Fall detection in RGB-D videos by combining shape and motion features,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (Shanghai: IEEE), 1337–1341. doi: 10.1109/ICASSP.2016.7471894

62. Kumar S. V., Manikandan K., and Kumar N. (2014). “Novel fall detection algorithm for the elderly people,” in 2014 International Conference on Science Engineering and Management Research (ICSEMR) (Shanghai: IEEE), 1–3. doi: 10.1109/ICSEMR.2014.7043578

63. Kwolek B., and Kepski M. (2014). Human fall detection on embedded platform using depth maps and wireless accelerometer. Comput. Methods Programs Biomed. 117, 489–501. doi: 10.1016/j.cmpb.2014.09.005

64. Kwolek B., and Kepski M. (2016). Fuzzy inference-based fall detection using kinect and body-worn accelerometer. Appl. Soft Comput. 40, 305–318. doi: 10.1016/j.asoc.2015.11.031

65. LeCun Y., Bengio Y., and Hinton G. (2015). Deep learning. Nature 521, 436–444. doi: 10.1038/nature14539

66. Leff B. (1997). Persons found in their homes helpless or dead. J. Am. Geriatr. Soc. 45, 393–394. doi: 10.1111/j.1532-5415.1997.tb03788.x

67. Li Q., Stankovic J. A., Hanson M. A., Barth A. T., Lach J., and Zhou G. (2009). “Accurate, fast fall detection using gyroscopes and accelerometer-derived posture information,” in 2009 Sixth International Workshop on Wearable and Implantable Body Sensor Networks (Berkeley, CA: IEEE), 138–143. doi: 10.1109/BSN.2009.46

68. Li X., Nie L., Xu H., and Wang X. (2018). “Collaborative fall detection using smart phone and kinect,” in Mobile Networks and Applications, eds H. Janicke, D. Katsaros, T. J. Cruz, Z. M. Fadlullah, A.-S. K. Pathan, K. Singh et al. (Springer), 1–14. doi: 10.1007/s11036-018-0998-y

69. Li Y., Banerjee T., Popescu M., and Skubic M. (2013). “Improvement of acoustic fall detection using kinect depth sensing,” in 2013 35th Annual International Conference of the IEEE Engineering in medicine and biology society (EMBC) (Osaka: IEEE), 6736–6739.

70. Liu L., Popescu M., Skubic M., and Rantz M. (2014). “An automatic fall detection framework using data fusion of Doppler radar and motion sensor network,” in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Chicago, IL: IEEE), 5940–5943.

71. Lord C. J., and Colvin D. P. (1991). “Falls in the elderly: detection and assessment,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Orlando, FL: IEEE), 1938–1939.

72. Lorincz K., Malan D. J., Fulford-Jones T. R., Nawoj A., Clavel A., Shnayder V., et al. (2004). Sensor networks for emergency response: challenges and opportunities. IEEE Pervas. Comput. 3, 16–23. doi: 10.1109/MPRV.2004.18

73. Luprano J. (2006). “European projects on smart fabrics, interactive textiles: Sharing opportunities and challenges,” in Workshop Wearable Technol. Intel. Textiles (Helsinki).

74. Ma C., Shimada A., Uchiyama H., Nagahara H., and Taniguchi R.-i. (2019). Fall detection using optical level anonymous image sensing system. Optics Laser Technol. 110, 44–61. doi: 10.1016/j.optlastec.2018.07.013

75. Ma X., Wang H., Xue B., Zhou M., Ji B., and Li Y. (2014). Depth-based human fall detection via shape features and improved extreme learning machine. IEEE J. Biomed. Health Inform. 18, 1915–1922. doi: 10.1109/JBHI.2014.2304357

76. Mahmud F., and Sirat N. S. (2015). Evaluation of three-axial wireless-based accelerometer for fall detection analysis. Int. J. Integr. Eng. 7, 15–20.

77. Martínez-Villaseñor L., Ponce H., Brieva J., Moya-Albor E., Núñez-Martínez J., and Peñafort-Asturiano C. (2019). Up-fall detection dataset: a multimodal approach. Sensors 19:1988. doi: 10.3390/s19091988

78. Mastorakis G., Ellis T., and Makris D. (2018). Fall detection without people: a simulation approach tackling video data scarcity. Expert Syst. Appl. 112, 125–137. doi: 10.1016/j.eswa.2018.06.019

79. Mastorakis G., Hildenbrand X., Grand K., and Makris D. (2007). Customisable fall detection: a hybrid approach using physics based simulation and machine learning. IEEE Trans. Biomed. Eng. 54, 1940–1950.

80. Mastorakis G., and Makris D. (2014). Fall detection system using kinect's infrared sensor. J. Realtime Image Process. 9, 635–646. doi: 10.1007/s11554-012-0246-9

81. Medrano C., Igual R., García-Magariño I., Plaza I., and Azuara G. (2017). Combining novelty detectors to improve accelerometer-based fall detection. Med. Biol. Eng. Comput. 55, 1849–1858. doi: 10.1007/s11517-017-1632-z

82. Min W., Yao L., Lin Z., and Liu L. (2018). Support vector machine approach to fall recognition based on simplified expression of human skeleton action and fast detection of start key frame using torso angle. IET Comput. Vis. 12, 1133–1140. doi: 10.1049/iet-cvi.2018.5324

83. Namba T., and Yamada Y. (2018a). Fall risk reduction for the elderly by using mobile robots based on deep reinforcement learning. J. Robot. Network. Artif. Life 4, 265–269. doi: 10.2991/jrnal.2018.4.4.2

84. Namba T., and Yamada Y. (2018b). Risks of deep reinforcement learning applied to fall prevention assist by autonomous mobile robots in the hospital. Big Data Cogn. Comput. 2:13. doi: 10.3390/bdcc2020013

85. Niu K., Zhang F., Xiong J., Li X., Yi E., and Zhang D. (2018). “Boosting fine-grained activity sensing by embracing wireless multipath effects,” in Proceedings of the 14th International Conference on emerging Networking EXperiments and Technologies (Heraklion), 139–151. doi: 10.1145/3281411.3281425

86. Nukala B., Shibuya N., Rodriguez A., Tsay J., Nguyen T., Zupancic S., et al. (2014). “A real-time robust fall detection system using a wireless gait analysis sensor and an artificial neural network,” in 2014 IEEE Healthcare Innovation Conference (HIC) (Seattle: IEEE), 219–222. doi: 10.1109/HIC.2014.7038914

87. Ofli F., Chaudhry R., Kurillo G., Vidal R., and Bajcsy R. (2013). “Berkeley MHAD: a comprehensive multimodal human action database,” in 2013 IEEE Workshop on Applications of Computer Vision (WACV) (Clearwater Beach, FL: IEEE), 53–60. doi: 10.1109/WACV.2013.6474999

88. Ozcan K., and Velipasalar S. (2016). Wearable camera- and accelerometer-based fall detection on portable devices. IEEE Embed. Syst. Lett. 8, 6–9. doi: 10.1109/LES.2015.2487241

89. Ozcan K., Velipasalar S., and Varshney P. K. (2017). Autonomous fall detection with wearable cameras by using relative entropy distance measure. IEEE Trans. Hum. Mach. Syst. 47, 31–39. doi: 10.1109/THMS.2016.2620904

90. Palipana S., Rojas D., Agrawal P., and Pesch D. (2018). Falldefi: ubiquitous fall detection using commodity wi-fi devices. Proc. ACM Interact. Mobile Wearable Ubiquit. Technol. 1, 1–25. doi: 10.1145/3161183

91. Pandian P., Mohanavelu K., Safeer K., Kotresh T., Shakunthala D., Gopal P., et al. (2008). Smart vest: Wearable multi-parameter remote physiological monitoring system. Med. Eng. Phys. 30, 466–477. doi: 10.1016/j.medengphy.2007.05.014

92. Paradiso R., Loriga G., and Taccini N. (2005). A wearable health care system based on knitted integrated sensors. IEEE Trans. Inform. Technol. Biomed. 9, 337–344. doi: 10.1109/TITB.2005.854512

93. Pierleoni P., Belli A., Palma L., Pellegrini M., Pernini L., and Valenti S. (2015). A high reliability wearable device for elderly fall detection. IEEE Sens. J. 15, 4544–4553. doi: 10.1109/JSEN.2015.2423562

94. Pister K., Hohlt B., Ieong I., Doherty L., and Vainio I. (2003). Ivy - A Sensor Network Infrastructure for the College of Engineering. Available online at: http://www-bsac.eecs.berkeley.edu/projects/ivy

95. Putra I., Brusey J., Gaura E., and Vesilo R. (2017). An event-triggered machine learning approach for accelerometer-based fall detection. Sensors 18:20. doi: 10.3390/s18010020

96. Queralta J. P., Gia T., Tenhunen H., and Westerlund T. (2019). “Edge-AI in Lora-based health monitoring: fall detection system with fog computing and LSTM recurrent neural networks,” in 2019 42nd International Conference on Telecommunications and Signal Processing (TSP) (IEEE), 601–604. doi: 10.1109/TSP.2019.8768883

97. Ray P. P. (2014). “Home health hub internet of things (H 3 IoT): an architectural framework for monitoring health of elderly people,” in 2014 International Conference on Science Engineering and Management Research (ICSEMR) (IEEE), 1–3. doi: 10.1109/ICSEMR.2014.7043542

98. Rougier C., Auvinet E., Rousseau J., Mignotte M., and Meunier J. (2011a). “Fall detection from depth map video sequences,” in International Conference on Smart Homes and Health Telematics (Montreal: Springer), 121–128. doi: 10.1007/978-3-642-21535-3_16

99. Rougier C., Meunier J., St-Arnaud A., and Rousseau J. (2011b). Robust video surveillance for fall detection based on human shape deformation. IEEE Trans. Circ. Syst. Video Technol. 21, 611–622. doi: 10.1109/TCSVT.2011.2129370

100. Sabatini A. M., Ligorio G., Mannini A., Genovese V., and Pinna L. (2016). Prior-to- and post-impact fall detection using inertial and barometric altimeter measurements. IEEE Trans. Neural Syst. Rehabil. Eng. 24, 774–783. doi: 10.1109/TNSRE.2015.2460373

101. Saleh M., and Jeannés R. L. B. (2019). Elderly fall detection using wearable sensors: a low cost highly accurate algorithm. IEEE Sens. J. 19, 3156–3164. doi: 10.1109/JSEN.2019.2891128

102. Schwickert L., Becker C., Lindemann U., Maréchal C., Bourke A., Chiari L., et al. (2013). Fall detection with body-worn sensors. Z. Gerontol. Geriatr. 46, 706–719. doi: 10.1007/s00391-013-0559-8

103. Senouci B., Charfi I., Heyrman B., Dubois J., and Miteran J. (2016). Fast prototyping of a SOC-based smart-camera: a real-time fall detection case study. J. Real Time Image Process. 12, 649–662. doi: 10.1007/s11554-014-0456-4

104. Shi T., Sun X., Xia Z., Chen L., and Liu J. (2016). Fall detection algorithm based on triaxial accelerometer and magnetometer. Eng. Lett. 24:EL_24_2_06.

105. Shibuya N., Nukala B. T., Rodriguez A., Tsay J., Nguyen T. Q., Zupancic S., et al. (2015). “A real-time fall detection system using a wearable gait analysis sensor and a support vector machine (SVM) classifier,” in 2015 Eighth International Conference on Mobile Computing and Ubiquitous Networking (ICMU) (IEEE), 66–67. doi: 10.1109/ICMU.2015.7061032

106. Shojaei-Hashemi A., Nasiopoulos P., Little J. J., and Pourazad M. T. (2018). “Video-based human fall detection in smart homes using deep learning,” in 2018 IEEE International Symposium on Circuits and Systems (ISCAS) (Florence: IEEE), 1–5. doi: 10.1109/ISCAS.2018.8351648

107. Spasova V., Iliev I., and Petrova G. (2016). Privacy preserving fall detection based on simple human silhouette extraction and a linear support vector machine. Int. J. Bioautomat. 20, 237–252.

108. Stone E. E., and Skubic M. (2015). Fall detection in homes of older adults using the Microsoft Kinect. IEEE J. Biomed. Health Inform. 19, 290–301. doi: 10.1109/JBHI.2014.2312180

109. Sucerquia A., López J., and Vargas-Bonilla J. (2018). Real-life/real-time elderly fall detection with a triaxial accelerometer. Sensors 18:1101. doi: 10.3390/s18041101

110. Thilo F. J., Hahn S., Halfens R. J., and Schols J. M. (2019). Usability of a wearable fall detection prototype from the perspective of older people-a real field testing approach. J. Clin. Nurs. 28, 310–320. doi: 10.1111/jocn.14599

111. Tian Y., Lee G.-H., He H., Hsu C.-Y., and Katabi D. (2018). RF-based fall monitoring using convolutional neural networks. Proc. ACM Interact. Mobile Wearable Ubiquitous Technol. 2, 1–24. doi: 10.1145/3264947

112. Tsinganos P., and Skodras A. (2018). On the comparison of wearable sensor data fusion to a single sensor machine learning technique in fall detection. Sensors 18:592. doi: 10.3390/s18020592

113. Wang H., Zhang D., Wang Y., Ma J., Wang Y., and Li S. (2017a). RT-fall: a real-time and contactless fall detection system with commodity wifi devices. IEEE Trans. Mob. Comput. 16, 511–526. doi: 10.1109/TMC.2016.2557795

114. Wang Y., Wu K., and Ni L. M. (2017b). Wifall: device-free fall detection by wireless networks. IEEE Trans. Mobile Comput. 16, 581–594. doi: 10.1109/TMC.2016.2557792

115. WHO (2018). Falls. Available online at: https://www.who.int/news-room/fact-sheets/detail/falls

116. Williams G., Doughty K., Cameron K., and Bradley D. (1998). “A smart fall and activity monitor for telecare applications,” in Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Vol. 20 Biomedical Engineering Towards the Year 2000 and Beyond (Cat. No. 98CH36286), Volume 3 (IEEE), 1151–1154. doi: 10.1109/IEMBS.1998.747074

117. Wu F., Zhao H., Zhao Y., and Zhong H. (2015). Development of a wearable-sensor-based fall detection system. Int. J. Telemed. Appl. 2015:2. doi: 10.1155/2015/576364

118. Wu T., Gu Y., Chen Y., Xiao Y., and Wang J. (2019). A mobile cloud collaboration fall detection system based on ensemble learning. arXiv [Preprint]. arXiv:1907.04788.

119. Xi X., Jiang W., Lü Z., Miran S. M., and Luo Z.-Z. (2020). Daily activity monitoring and fall detection based on surface electromyography and plantar pressure. Complexity 2020:9532067. doi: 10.1155/2020/9532067

120. Xi X., Tang M., Miran S. M., and Luo Z. (2017). Evaluation of feature extraction and recognition for activity monitoring and fall detection based on wearable SEMG sensors. Sensors 17:1229. doi: 10.3390/s17061229

121. Xu T., Zhou Y., and Zhu J. (2018). New advances and challenges of fall detection systems: a survey. Appl. Sci. 8:418. doi: 10.3390/app8030418

122. Yang G. (2018). A Study on Autonomous Motion Planning of Mobile Robot by Use of Deep Reinforcement Learning for Fall Prevention in Hospital. Japan: JUACEP Independent Research Report, Nagoya University.

123. Yang G.-Z., and Yang G. (2006). Body Sensor Networks. Springer. doi: 10.1007/1-84628-484-8

124. Yang K., Ahn C. R., Vuran M. C., and Aria S. S. (2016). Semi-supervised near-miss fall detection for ironworkers with a wearable inertial measurement unit. Automat. Construct. 68, 194–202. doi: 10.1016/j.autcon.2016.04.007

125. Yang S.-W., and Lin S.-K. (2014). Fall detection for multiple pedestrians using depth image processing technique. Comput. Methods Programs Biomed. 114, 172–182. doi: 10.1016/j.cmpb.2014.02.001

126. Yazar A., Erden F., and Cetin A. E. (2014). “Multi-sensor ambient assisted living system for fall detection,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-14) (Florence), 1–3.

127. Yun Y., Innocenti C., Nero G., Lindén H., and Gu I. Y.-H. (2015). “Fall detection in RGB-D videos for elderly care,” in 2015 17th International Conference on E-health Networking, Application & Services (HealthCom) (Boston, MA: IEEE), 422–427.

128. Zhang L., Wang C., Ma M., and Zhang D. (2019). Widigr: direction-independent gait recognition system using commercial wi-fi devices. IEEE Internet Things J. 7, 1178–1191. doi: 10.1109/JIOT.2019.2953488

129. Zhang T., Wang J., Liu P., and Hou J. (2006). Fall detection by embedding an accelerometer in cellphone and using kfd algorithm. Int. J. Comput. Sci. Netw. Security 6, 277–284.

130. Zhang Z., Conly C., and Athitsos V. (2014). “Evaluating depth-based computer vision methods for fall detection under occlusions,” in International Symposium on Visual Computing (Las Vegas: Springer), 196–207. doi: 10.1007/978-3-319-14364-4_19

131. Zhang Z., Conly C., and Athitsos V. (2015). “A survey on vision-based fall detection,” in Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments (Las Vegas: ACM), 46. doi: 10.1145/2769493.2769540

132. Zhao M., Li T., Abu Alsheikh M., Tian Y., Zhao H., Torralba A., et al. (2018). “Through-wall human pose estimation using radio signals,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Long Beach, CA), 7356–7365. doi: 10.1109/CVPR.2018.00768

133. Zitouni M., Pan Q., Brulin D., and Campo E. (2019). Design of a smart sole with advanced fall detection algorithm. J. Sensor Technol. 9:71. doi: 10.4236/jst.2019.94007

Keywords: fall detection, Internet of Things (IoT), information system, wearable device, ambient device, sensor fusion

Citation: Wang X, Ellul J and Azzopardi G (2020) Elderly Fall Detection Systems: A Literature Survey. Front. Robot. AI 7:71. doi: 10.3389/frobt.2020.00071

Received: 17 December 2019; Accepted: 30 April 2020;
Published: 23 June 2020.

Edited by:

Soumik Sarkar, Iowa State University, United States

Reviewed by:

Sambuddha Ghosal, Massachusetts Institute of Technology, United States
Carl K. Chang, Iowa State University, United States

Copyright © 2020 Wang, Ellul and Azzopardi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xueyi Wang, xueyi.wang@rug.nl