EDITORIAL article

Front. Comput. Sci., 26 May 2022

Sec. Computer Vision

Volume 4 - 2022 | https://doi.org/10.3389/fcomp.2022.937433

Editorial: Machine Vision for Assistive Technologies

  • 1. Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, Lecce, Italy

  • 2. Department of Mathematics and Computer Science, University of Catania, Catania, Italy

  • 3. Institute of Robotics and Intelligent Systems, University of Southern California, Los Angeles, CA, United States


The last decade has witnessed the significant impact of Computer Vision and Robotics on real-world products. Traditional Computer Vision problems such as tracking, 3D reconstruction, detection, recognition, odometry, and navigation are now solved with significantly higher accuracy using Machine Learning (Farinella et al., 2020). However, most of these results have focused on constrained application scenarios that do not involve the integration of feedback from the user (Leo et al., 2019). Since these applications do not consider the user's intentions and goals, they tend to be of limited use when it is necessary to assist humans.

With the pervasive successes of Computer Vision and Robotics and the advent of Industry 4.0, it has become paramount to design systems that can truly assist humans and augment their abilities to tackle both physical and intellectual tasks. We broadly refer to such systems as “assistive technologies” (Leo et al., 2017). Examples of these technologies include approaches that help visually impaired people navigate and perceive the world, wearable devices that use artificial intelligence and mixed and augmented reality to improve perception and bring computation directly to the user, and systems designed to aid industrial processes and improve the safety of workers (Leo and Farinella, 2018). These technologies need to follow an operational paradigm in which the user is central and can both influence and be influenced by the system. Although some examples of this approach exist (Fosch-Villaronga et al., 2021), implementing applications according to this “human-in-the-loop” scenario still requires considerable effort to reach an adequate level of reliability, and it introduces challenging satellite issues related to usability, privacy, and acceptability.

The main aim of this Research Topic was to gather contributions from the diverse fields of engineering and computer science on technologies, involving Computer Vision and Robotics, that provide real-time, continuous assistance and support to humans while they perform a task.

At the end of a double-blind review process that involved distinguished researchers from industry and academia, four papers were accepted.

The first paper (papers are listed by acceptance date) is titled “Communicating Photograph Content Through Tactile Images to People With Visual Impairments” (Pakenaite et al.). It introduces an approach to make visual content accessible via touch. State-of-the-art algorithms automatically process an input photograph into a collage of icons that depict the most important semantic aspects of the scene. The collage is then printed onto swell paper, thereby allowing visually impaired people to access photographs and better enjoy books, tourist brochures, etc.
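As a rough illustration of this kind of pipeline, the sketch below (in Python, using Pillow) composes label-matched icons onto a high-contrast canvas suitable for swell-paper printing. The `detect_objects` stub, the icon file layout, and all coordinates are hypothetical stand-ins for the detection and recognition models actually used in the paper.

```python
# Hypothetical sketch of a photo-to-tactile-collage pipeline (not the
# authors' implementation): detect salient objects, then paste matching
# icons onto a high-contrast canvas for swell-paper printing.
from PIL import Image

def detect_objects(photo: Image.Image) -> list[tuple[str, tuple[float, float, float, float]]]:
    """Stand-in for the paper's detection stage: returns (label, box)
    pairs, with boxes as left/top/right/bottom fractions of the image."""
    return [("dog", (0.1, 0.5, 0.4, 0.9)), ("tree", (0.6, 0.1, 0.9, 0.8))]

def build_tactile_collage(photo_path: str, icon_dir: str, size=(1024, 768)) -> Image.Image:
    photo = Image.open(photo_path)
    canvas = Image.new("1", size, color=1)  # 1-bit image: black swells, white stays flat
    for label, (l, t, r, b) in detect_objects(photo):
        icon = Image.open(f"{icon_dir}/{label}.png").convert("1")
        w, h = int((r - l) * size[0]), int((b - t) * size[1])
        # Preserve each object's rough position and scale so the
        # spatial layout of the scene survives in the tactile image.
        canvas.paste(icon.resize((w, h)), (int(l * size[0]), int(t * size[1])))
    return canvas

if __name__ == "__main__":
    build_tactile_collage("photo.jpg", "icons").save("tactile_collage.png")
```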

The paper “Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography” demonstrates the feasibility of segmenting cerebral arteries in the operating-field view using deep learning, as well as the effectiveness of a method that automatically generates blood-vessel ground truth from indocyanine green (ICG) fluorescence videoangiography (Kim et al.).
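The general idea of fluorescence-derived ground truth can be conveyed with a short, hedged sketch: vessels appear bright under ICG, so thresholding a fluorescence frame yields a binary vessel mask for the corresponding operating-field frame. This is a simplification under assumed preprocessing, not the authors' exact procedure.

```python
# Simplified sketch of fluorescence-derived ground truth (not the authors'
# exact method): vessels fluoresce brightly under ICG, so a threshold over
# the normalized fluorescence frame yields a binary vessel mask.
import numpy as np

def icg_vessel_mask(icg_frame: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binary vessel mask from a grayscale ICG fluorescence frame."""
    rng = float(icg_frame.max() - icg_frame.min())
    norm = (icg_frame - icg_frame.min()) / (rng + 1e-8)
    return (norm > threshold).astype(np.uint8)

# Each (white-light frame, mask) pair can then serve as one training
# example for a standard semantic-segmentation network.
```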

The paper “Environment Classification for Robotic Leg Prostheses and Exoskeletons Using Deep Convolutional Neural Networks” deals with Robotic leg prostheses and exoskeletons that can provide powered locomotor assistance to older adults and/or persons with physical disabilities (Laschowski et al.). Inspired by the human vision-locomotor control system, the authors developed an environment classification system powered by computer vision and deep learning to predict the oncoming walking environments prior to physical interaction, therein allowing for more accurate and robust high-level control decisions.
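A minimal sketch of such a vision-based terrain classifier appears below, assuming a pretrained torchvision backbone and an illustrative label set; the actual networks, classes, and training protocol are those described in the paper, not these.

```python
# Minimal sketch (not the authors' model) of a vision-based environment
# classifier for high-level prosthesis control: a pretrained CNN backbone
# fine-tuned to predict the oncoming terrain class from a camera frame.
import torch
import torch.nn as nn
from torchvision import models

# Illustrative class names, not the paper's label set.
CLASSES = ["level_ground", "incline_stairs", "decline_stairs", "ramp", "obstacle"]

model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[-1] = nn.Linear(model.last_channel, len(CLASSES))  # new head

def predict_environment(frame: torch.Tensor) -> str:
    """frame: normalized RGB tensor of shape (3, 224, 224)."""
    model.eval()
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))
    return CLASSES[int(logits.argmax(dim=1))]
```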

The last paper, “Recognition and Classification of Ship Images Based on SMS-PCNN Model,” lies in the field of ship image recognition and classification (Wang et al.). To extract ship features at different scales, the authors propose a multi-scale parallel CNN (SMS-PCNN) with three characteristics: (1) image features of different sizes are extracted by parallel convolutional branches with different receptive fields; (2) the number of channels is adjusted twice, to extract features and to eliminate redundant information; and (3) residual connections are used to extend the network depth and mitigate vanishing gradients.
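The three characteristics can be illustrated with a minimal PyTorch block; the branch kernel sizes, channel counts, and fusion choices here are assumptions for exposition, not the published SMS-PCNN configuration.

```python
# Hedged sketch of a multi-scale parallel residual block in the spirit of
# the three characteristics above (not the published SMS-PCNN).
import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # (1) Parallel branches with different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        # (2) 1x1 convolution fuses the branches and squeezes the channel
        # count back down, discarding redundant information.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        # (3) Residual connection eases optimization as depth grows.
        return self.act(x + self.fuse(multi_scale))

block = MultiScaleResidualBlock(channels=64)
print(block(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```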

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Statements

Author contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • Farinella, G. M., Leo, M., Medioni, G., and Trivedi, M. (2020). Learning and recognition for assistive computer vision. Pattern Recogn. Lett. 137, 1–2. doi: 10.1016/j.patrec.2019.11.006

  • Fosch-Villaronga, E., Khanna, P., Drukarch, H., and Custers, B. H. (2021). A human in the loop in surgery automation. Nat. Mach. Intell. 3, 368–369. doi: 10.1038/s42256-021-00349-4

  • Leo, M., and Farinella, G. M. (2018). Computer Vision for Assistive Healthcare. Cambridge, MA: Academic Press.

  • Leo, M., Furnari, A., Medioni, G. G., Trivedi, M., and Farinella, G. M. (2019). “Deep learning for assistive computer vision,” in Computer Vision – ECCV 2018 Workshops, Lecture Notes in Computer Science, Vol. 11134, eds L. Leal-Taixé and S. Roth (Cham: Springer). doi: 10.1007/978-3-030-11024-6_1

  • Leo, M., Medioni, G., Trivedi, M., Kanade, T., and Farinella, G. M. (2017). Computer vision for assistive technologies. Comp. Vis. Image Understand. 154, 1–15. doi: 10.1016/j.cviu.2016.09.001


Keywords

assistive technologies, egocentric vision, industrial applications, human-robot interaction, symbiotic human-machine systems, editorial

Citation

Leo M, Farinella GM, Furnari A and Medioni G (2022) Editorial: Machine Vision for Assistive Technologies. Front. Comput. Sci. 4:937433. doi: 10.3389/fcomp.2022.937433

Received

06 May 2022

Accepted

16 May 2022

Published

26 May 2022

Volume

4 - 2022

Edited and reviewed by

Marcello Pelillo, Ca' Foscari University of Venice, Italy


Copyright © 2022 Leo, Farinella, Furnari and Medioni.

*Correspondence: Marco Leo

This article was submitted to Computer Vision, a section of the journal Frontiers in Computer Science

