EDITORIAL article

Front. Comput. Sci., 26 May 2022
Sec. Computer Vision
This article is part of the Research Topic Machine Vision for Assistive Technologies.

Editorial: Machine Vision for Assistive Technologies

Marco Leo1*, Giovanni Maria Farinella2, Antonino Furnari2 and Gérard Medioni3

  • 1Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, Lecce, Italy
  • 2Department of Mathematics and Computer Science, University of Catania, Catania, Italy
  • 3Institute of Robotics and Intelligent Systems, University of Southern California, Los Angeles, CA, United States

Editorial on the Research Topic
Machine Vision for Assistive Technologies

The last decade has witnessed the significant impact of Computer Vision and Robotics on real-world products. Traditional Computer Vision problems such as tracking, 3D reconstruction, detection, recognition, odometry, and navigation are now solved with significantly higher accuracy using Machine Learning (Farinella et al., 2020). However, most of these results have focused on constrained application scenarios that do not involve the integration of feedback from the user (Leo et al., 2019). Since these applications do not consider the user's intentions and goals, they tend to be of limited use when it is necessary to assist humans.

With the pervasive successes of Computer Vision and Robotics and the advent of Industry 4.0, it has become paramount to design systems that can truly assist humans and augment their abilities to tackle both physical and intellectual tasks. We broadly refer to such systems as “assistive technologies” (Leo et al., 2017). Examples of these technologies include approaches that assist visually impaired people in navigating and perceiving the world, wearable devices that use artificial intelligence and mixed or augmented reality to improve perception and bring computation directly to the user, and systems designed to aid industrial processes and improve the safety of workers (Leo and Farinella, 2018). These technologies need to consider an operational paradigm in which the user is central and can both influence and be influenced by the system. Although some examples of this approach exist (Fosch-Villaronga et al., 2021), implementing applications according to this “human-in-the-loop” scenario still requires considerable effort to reach an adequate level of reliability, and it introduces challenging satellite issues related to usability, privacy, and acceptability.

The main aim of this Research Topic was to gather contributions from the diverse fields of engineering and computer science, in the context of technologies involving Computer Vision and Robotics, for the real-time, continuous assistance and support of humans performing any task.

At the end of a double-blind review process that involved distinguished researchers from industry and academia, four papers were accepted.

The first paper (papers are sorted by acceptance date) is titled “Communicating Photograph Content Through Tactile Images to People With Visual Impairments” (Pakenaite et al.). It introduces an approach to make visual content accessible via touch. State-of-the-art algorithms automatically process an input photograph into a collage of icons that depict the most important semantic aspects of the scene. This collage is then printed onto swell paper, thereby allowing visually impaired people to access photographs and better enjoy books, tourist brochures, etc.
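
As a rough illustration of such a photograph-to-collage pipeline, the sketch below detects the most confident objects with an off-the-shelf detector and pastes a matching icon at each detected location. Everything here is an assumption for illustration, not the authors' pipeline: a COCO-pretrained torchvision detector stands in for the paper's state-of-the-art algorithms, and icon_dir is a hypothetical folder with one high-contrast icon image per class name.

```python
# Illustrative sketch only, not the authors' pipeline.
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)
from PIL import Image

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]

def photo_to_icon_collage(photo_path: str, icon_dir: str, score_thresh: float = 0.8) -> Image.Image:
    """Detect salient objects in a photo and paste a matching icon at each location."""
    photo = Image.open(photo_path).convert("RGB")
    x = weights.transforms()(photo)              # PIL image -> normalized tensor
    with torch.no_grad():
        detections = model([x])[0]               # torchvision detectors take a list of 3D tensors
    collage = Image.new("RGB", photo.size, "white")  # blank canvas, suitable for swell-paper printing
    for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
        if score < score_thresh:                 # keep only the most confident (salient) objects
            continue
        x0, y0, x1, y1 = (int(v) for v in box.tolist())
        icon = Image.open(f"{icon_dir}/{categories[label]}.png")  # hypothetical icon lookup
        collage.paste(icon.resize((max(1, x1 - x0), max(1, y1 - y0))), (x0, y0))
    return collage
```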

The paper “Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography” (Kim et al.) demonstrates the feasibility of segmenting cerebral arteries in the operating-field view using deep learning, as well as the effectiveness of a method that automatically generates blood-vessel ground truth from indocyanine green (ICG) fluorescence videoangiography.
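
The automatic ground-truth idea can be sketched in a few lines: since ICG makes perfused vessels fluoresce brightly, thresholding an aligned fluorescence frame yields a binary vessel mask that can supervise segmentation of the corresponding white-light microscope frame. The Otsu threshold and morphological cleanup below are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch of automatic vessel ground-truth generation from an ICG frame.
import cv2
import numpy as np

def vessel_mask_from_icg(fluorescence_frame: np.ndarray) -> np.ndarray:
    """Return a binary vessel mask (255 = vessel) from a grayscale ICG frame."""
    blurred = cv2.GaussianBlur(fluorescence_frame, (5, 5), 0)  # suppress sensor noise
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop isolated speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # bridge small gaps in vessels
    return mask
```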

The paper “Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks” (Laschowski et al.) deals with robotic leg prostheses and exoskeletons that can provide powered locomotor assistance to older adults and/or persons with physical disabilities. Inspired by the human vision-locomotor control system, the authors developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environment prior to physical interaction, thereby allowing for more accurate and robust high-level control decisions.
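
At its core, such a system is a CNN image classifier over walking-environment classes, applied to frames from a wearable camera before the limb reaches the terrain. The sketch below conveys this, but the class list and the MobileNetV2 backbone are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: CNN-based classification of oncoming walking environments.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

# Hypothetical environment classes; the paper defines its own taxonomy.
ENVIRONMENTS = ["level-ground", "incline-stairs", "decline-stairs", "ramp", "uneven-terrain"]

model = mobilenet_v2(num_classes=len(ENVIRONMENTS))  # would be trained on wearable-camera frames

def predict_environment(frame: torch.Tensor) -> str:
    """Classify one normalized camera frame (3xHxW) into an oncoming environment."""
    model.eval()
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))           # add a batch dimension
    return ENVIRONMENTS[logits.argmax(dim=1).item()]
```

The prediction would then feed the prosthesis controller as a high-level mode decision, ahead of any physical contact with the terrain.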

The last paper, “Recognition and Classification of Ship Images Based on SMS-PCNN Model” (Wang et al.), lies in the field of ship image recognition and classification. To extract ship features at different scales, the authors propose a multi-scale parallel CNN with three characteristics: (1) image features of different sizes are extracted by parallel convolutional branches with different receptive fields; (2) the number of channels is adjusted twice to extract features and eliminate redundant information; and (3) residual connections are used to extend the network depth and mitigate vanishing gradients.
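
A minimal PyTorch sketch of these three ingredients follows; it is an assumed reconstruction for illustration, not the authors' SMS-PCNN code. Parallel branches with 1x1, 3x3, and 5x5 kernels give different receptive fields, a 1x1 fusion convolution squeezes the concatenated channels back down, and a skip connection makes the block residual.

```python
# Sketch of a multi-scale parallel block with a residual connection.
import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # (1) Parallel convolutional branches with different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])
        # (2) A 1x1 convolution adjusts the channel count, fusing the branches
        #     and discarding redundant information.
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        # (3) Residual connection: lets many blocks stack without vanishing gradients.
        return x + self.fuse(multi_scale)

y = MultiScaleResidualBlock(64)(torch.randn(1, 64, 32, 32))  # shape preserved: (1, 64, 32, 32)
```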

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Farinella, G. M., Leo, M., Medioni, G., and Trivedi, M. (2020). Learning and recognition for assistive computer vision. Pattern Recogn. Lett. 137, 1–2. doi: 10.1016/j.patrec.2019.11.006

Fosch-Villaronga, E., Khanna, P., Drukarch, H., and Custers, B. H. (2021). A human in the loop in surgery automation. Nat. Mach. Intell. 3, 368–369. doi: 10.1038/s42256-021-00349-4

Leo, M., and Farinella, G. M. (2018). Computer Vision for Assistive Healthcare. Cambridge, MA: Academic Press.

Leo, M., Furnari, A., Medioni, G. G., Trivedi, M., and Farinella, G. M. (2019). “Deep learning for assistive computer vision,” in Computer Vision – ECCV 2018 Workshops. Lecture Notes in Computer Science, Vol. 11134, eds L. Leal-Taixé, and S. Roth (Cham: Springer). doi: 10.1007/978-3-030-11024-6_1

Leo, M., Medioni, G., Trivedi, M., Kanade, T., and Farinella, G. M. (2017). Computer vision for assistive technologies. Comp. Vis. Image Understand. 154, 1–15. doi: 10.1016/j.cviu.2016.09.001

Keywords: assistive technologies, egocentric vision, industrial applications, human-robot interaction, symbiotic human-machine systems, editorial

Citation: Leo M, Farinella GM, Furnari A and Medioni G (2022) Editorial: Machine Vision for Assistive Technologies. Front. Comput. Sci. 4:937433. doi: 10.3389/fcomp.2022.937433

Received: 06 May 2022; Accepted: 16 May 2022;
Published: 26 May 2022.

Edited and reviewed by: Marcello Pelillo, Ca' Foscari University of Venice, Italy

Copyright © 2022 Leo, Farinella, Furnari and Medioni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Marco Leo, marco.leo@cnr.it
