Specialty Grand Challenge Article
A grand challenge for vision systems: improving the quality of life and care of aging adults
Antonio Fernández-Caballero, Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
While computer vision has made much progress over the last decade, there is still a long way to go before it rivals the performance of a human child (Pankanti et al., 2011). The challenges facing vision systems are therefore manifold. A grand challenge dedicated to improving the quality of life and care of aging adults was chosen because of its high relevance for a current and future society facing enormous demographic changes. According to the Strategic Research Agenda of the Joint Programming Initiative “More Years, Better Lives,” 65-year-olds are likely to be healthier and more active than their parents were at the same age, and the proportion of people aged over 80 is rising rapidly. This may help offset the effects of aging in some countries or regions, but it brings its own challenges. One of these challenges is precisely to enhance the quality of life and care of older people (Bowling and Brazier, 1995).
Quality of life is defined as an individual’s perceived quality of daily life and the appreciation of being a human being. It integrates the emotional, social, and physical aspects of a person’s life. Quality of life and care also reflect how illness or disability affects welfare; adequate individual monitoring is therefore essential.
It is widely recognized that aging people generally prefer living in their own homes to other options (e.g., Pynoos et al., 2010; Carswell, 2012; Vasunilashorn et al., 2012). However, it is difficult to provide the necessary security and home care without monitoring, especially when an older person encounters difficulties. Vision systems, supplemented by other sensing systems, can provide a solution to several problems that the elderly face at home. Vision Systems Theory, Tools, and Applications was born with the aim of collecting valuable scientific and technological proposals in this ever-growing field of robust people monitoring.
Vision Systems to Monitor the Emotional State of Aging People
Automated monitoring of emotional states is among the most important tools in the areas of health sciences and rehabilitation, clinical psychology, psychiatry, and gerontology. Current research in wireless sensor networks (WSNs) (Chen et al., 2011) and body area networks (Jovanov and Milenkovic, 2011) has enabled the inclusion of advanced monitoring devices in this context. It is also important to consider multisensory approaches, which combine the multiple sources of information presented by various sensors to generate a more accurate and robust interpretation of a given situation (Pavón et al., 2008). Ultimately, a semantic interpretation of multi-sensed and video-controlled environments (Gascueña and Fernández-Caballero, 2011) is required for the recognition of specific situations and activities (Fernández-Caballero et al., 2013). More concretely, our challenge is to apply smart techniques to detect the mood and activity of those living alone in their homes (Bartholmai et al., 2013).
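The multisensory idea above can be sketched minimally as confidence-weighted voting: each sensor reports an estimate of the resident's state together with a confidence, and the interpretation with the greatest accumulated support wins. The sensor readings and state labels below are invented for illustration; real systems fuse far richer evidence.

```python
# Hypothetical sketch of multisensor fusion: combine per-sensor estimates of a
# resident's state, each weighted by the reporting sensor's confidence.

def fuse_estimates(estimates):
    """estimates: list of (label, confidence) pairs from different sensors.
    Returns the label with the greatest summed confidence."""
    totals = {}
    for label, confidence in estimates:
        totals[label] = totals.get(label, 0.0) + confidence
    return max(totals, key=totals.get)

# The camera alone suggests "agitated", but the wearable and the microphone
# together lean toward "calm", so fusion overrides the single-sensor view.
readings = [("agitated", 0.6), ("calm", 0.5), ("calm", 0.4)]
print(fuse_estimates(readings))  # calm
```

Weighted voting is only the simplest fusion rule; probabilistic approaches (e.g., Bayesian fusion) would additionally model each sensor's reliability over time.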
The ability to monitor changes in the emotional state of a person in his/her own context should enable the implementation of regulatory strategies to reduce negative affective states. Emotion interpretation in humans has traditionally been an area of interest in disciplines such as psychology and sociology and, until now, very few real-world applications have related emotions to human behavior. Discovering emotional states by machine vision covers all aspects of vision systems, from the low-level processes of early vision to the high-level processes of recognition and interpretation.
Currently, the least intrusive approach to automatic emotion recognition is based on the study of facial expression. The Facial Action Coding System (Ekman et al., 2002) encodes all possible facial expressions in terms of action units (AUs), which may occur individually or in combination. In fact, a set of AUs generally describes the facial expressions associated with emotions (Soleymani et al., 2012).
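The AU-to-emotion step can be illustrated with a minimal sketch. The AU combinations below follow commonly cited FACS-style prototypes (e.g., AU6 + AU12 for happiness) but are simplified for illustration; a real recognizer would first detect AUs from video and handle intensities, not just their presence.

```python
# Illustrative mapping from detected FACS action units (AUs) to basic emotions.
# Prototype AU sets are simplified; this is not a clinical coding tool.

EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "anger": {4, 5, 7, 23},      # brow lowerer + lid raiser/tightener + lip tightener
}

def interpret_aus(detected_aus):
    """Return the emotion whose AU prototype best overlaps the detected AUs,
    scored by Jaccard overlap (intersection over union)."""
    best_emotion, best_score = "neutral", 0.0
    for emotion, prototype in EMOTION_PROTOTYPES.items():
        score = len(detected_aus & prototype) / len(detected_aus | prototype)
        if score > best_score:
            best_emotion, best_score = emotion, score
    return best_emotion, best_score

emotion, score = interpret_aus({6, 12})
print(emotion, round(score, 2))  # happiness 1.0
```

Overlap scoring makes the mapping tolerant to missed or spurious AU detections, which matters when AUs are extracted from uncontrolled home video.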
For emotion interpretation and further affect regulation, Ambient Intelligence (AmI) is a key element (Acampora and Vitiello, 2013). Indeed, AmI proposes creating intelligent environments that suit the needs, personal tastes, and interests of the people who live in them. Moreover, AmE, the so-called emotion-aware version of AmI, exploits concepts from psychology and the social sciences to adequately analyze the state of an individual and enrich the results with the help of contextual information. AmE achieves this objective by extending AmI devices with a collection of improved sensors able to recognize human emotion from facial expressions (Susskind et al., 2007), hand gestures, and other body movements indicative of human behavior (Silva et al., 2006; Vogt et al., 2008).
Affective robotics is another fundamental area to consider when aiming to improve the quality of life and care of aging adults (Broekens et al., 2009). Affective robotics is a research field that addresses the interaction between robots and people through emotions or emotion-related concepts (Esposito et al., 2014). There is no doubt that pet-like robots equipped with vision systems offer an opportunity to detect the emotional states of the elderly (Klein et al., 2013).
Improving the quality of life and care of aging adults is a flagship challenge of vision systems research, both because of its societal relevance and because it integrates all the current areas of development in vision systems tools and applications: AmI, smart environments, affective robotics, and human–machine interaction.
Conflict of Interest Statement
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This work was partially supported by the Spanish Ministerio de Economía y Competitividad/FEDER under grant TIN2013-47074-C2-1-R.
Bartholmai, M., Koeppe, E., and Neumann, P. P. (2013). “Monitoring of hazardous scenarios using multi-sensor devices,” in The Fourth International Conference on Sensor Device Technologies and Applications, (Barcelona: IARIA), 9–13.
Ekman, P., Friesen, W. V., and Hager, J. C. (2002). The New Facial Action Coding System. Available at: http://www.paulekman.com/product/facs-manual/
Fernández-Caballero, A., Castillo, J. C., López, M. T., Serrano-Cuerda, J., and Sokolova, M. V. (2013). INT3-Horus framework for multispectrum activity interpretation in intelligent environments. Expert Syst. Appl. 40, 6715–6727. doi:10.1016/j.eswa.2013.06.058
Gascueña, J. M., and Fernández-Caballero, A. (2011). On the use of agent technology in intelligent, multi-sensory and distributed surveillance. Knowl. Eng. Rev. 26, 191–208. doi:10.1017/S0269888911000026
Pankanti, S., Brown, L., Connell, J., Datta, A., Fan, Q., Feris, R. S., et al. (2011). Practical computer vision: example techniques and challenges. IBM J. Res. Dev. 55, 3:1–3:12. doi:10.1118/1.4811136
Pavón, J., Gómez-Sanz, J. J., Fernández-Caballero, A., and Valencia-Jiménez, J. J. (2008). Development of intelligent multi-sensor surveillance systems with agents. Rob. Auton. Syst. 55, 892–903. doi:10.1016/j.robot.2007.07.009
Pynoos, J., Cicero, C., and Nishita, C. (2010). “New challenges and growing trends in senior housing,” in The New Politics of Old Age Policy, ed. R. B. Hudson (Baltimore, MD: The Johns Hopkins University Press), 324–336.
Silva, P. R. D., Osano, M., Marasinghe, A., and Madurapperuma, A. P. (2006). “Towards recognizing emotion with affective dimensions through body gestures,” in Seventh IEEE International Conference on Automatic Face and Gesture Recognition, (Washington, DC: IEEE Computer Society), 269–274.
Soleymani, M., Lichtenauer, J., Pun, T., and Pantic, M. (2012). A multi-modal affective database for affect recognition and implicit tagging. IEEE Trans. Affect. Comput. 3, 42–55. doi:10.1109/T-AFFC.2011.25
Susskind, J. M., Littlewort, G., Bartlett, M. S., Movellan, J., and Anderson, A. K. (2007). Human and computer recognition of facial expressions of emotion. Neuropsychologia 45, 152–162. doi:10.1016/j.neuropsychologia.2006.05.001
Vogt, T., André, E., and Wagner, J. (2008). “Automatic recognition of emotions from speech: a review of the literature and recommendations for practical realisation,” in Affect and Emotion in Human-Computer Interaction, eds C. Peter and R. Beale (Heidelberg: Springer-Verlag), 75–91.
Keywords: quality of life and care, aging adults, vision systems, emotion detection, ambient intelligence, affective robotics
Citation: Fernández-Caballero A (2015) A grand challenge for vision systems: improving the quality of life and care of aging adults. Front. Robot. AI 2:15. doi: 10.3389/frobt.2015.00015
Received: 11 May 2015; Accepted: 02 June 2015;
Published: 16 June 2015
Edited by: Hugo Pedro Proença, University of Beira Interior, Portugal
Reviewed by: Nuno Pombo, University of Beira Interior, Portugal
Copyright: © 2015 Fernández-Caballero. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Antonio Fernández-Caballero, email@example.com