%0 Journal Article
%A Cooney, Martin
%A Bigun, Josef
%D 2017
%T PastVision+: Thermovisual Inference of Recent Medicine Intake by Detecting Heated Objects and Cooled Lips
%J Frontiers in Robotics and AI
%V 4
%@ 2296-9144
%R 10.3389/frobt.2017.00061
%U https://www.frontiersin.org/articles/10.3389/frobt.2017.00061
%8 2017-November-20
%9 Original Research
%G English
%K Intention recognition, Thermo-visual, Home robots, Action recognition, Monitoring, Medicine Intake
%+ Martin Cooney, Intelligent Systems Laboratory, Halmstad University, Sweden, martin.daniel.cooney@gmail.com
%! PastVision+: Thermo-visual Inference of Unobserved Recent Events
%X This article addresses the problem of how a robot can infer what a person has done recently, with a focus on checking oral medicine intake in dementia patients. We present PastVision+, an approach showing how thermovisual cues in objects and humans can be leveraged to infer recent unobserved human–object interactions. We expect this approach to offer improved speed and robustness over existing methods, because it can draw inferences from single images without waiting to observe ongoing actions and can handle short-lasting occlusions; when combined with existing methods, it could also improve accuracy by contributing extra information about what a person has recently done. To evaluate our approach, we collected data in which an experimenter touched medicine packages and a glass of water to simulate intake of oral medicine, in a challenging scenario in which some touches were conducted in front of a warm background. Results were promising: detection accuracy for touched objects was 50% at the 15 s mark and 0% at the 60 s mark, and detection accuracy for cooled lips was approximately 100% and 60% at the 15 s mark for cold and tepid water, respectively. Furthermore, we conducted a follow-up check for another challenging scenario in which some participants pretended to take medicine or otherwise touched a medicine package: accuracies of inferring object touches, mouth touches, and actions were 72.2%, 80.3%, and 58.3% initially, and 50.0%, 81.7%, and 50.0% at the 15 s mark, with a person identification rate of 89.0%. The results suggest areas for further improvement toward facilitating robot inference of human actions in the context of medicine intake monitoring.